On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose@redhat.com> wrote:
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose@redhat.com> wrote:
...
then the commands I need to run would be:
gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start
gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export gl01.localdomain.local:/gluster/brick3/export commit force

Correct?

Yes, correct. gl01.localdomain.local should resolve correctly on all 3 nodes.

It fails at the first step:

[root@ovirt01 ~]# gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start
volume reset-brick: failed: Cannot execute command. The cluster is operating at version 30712. reset-brick command reset-brick start is unavailable in this version.
[root@ovirt01 ~]#

It seems somehow in relation with this upgrade not being of the commercial solution Red Hat Gluster Storage. So it seems I have to run a command of the form:

gluster volume set all cluster.op-version XXXXX

with XXXXX > 30712.

It seems that the latest version of the commercial Red Hat Gluster Storage is 3.1 and its op-version is indeed 30712. So the question is which particular op-version I have to set, and whether the command can be run online without causing disruption...

It should have worked with the glusterfs 3.10 version from the CentOS repo. Adding gluster-users for help on the op-version.
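As a quick sketch (not from the thread): the numeric op-version can be pulled out of the `gluster volume get` output with a small awk filter. `get_value` is a hypothetical helper, and the command output is canned from the session above rather than taken from a live cluster:

```shell
#!/bin/sh
# Hypothetical helper, not part of gluster: print the Value column
# for the cluster.op-version row of `gluster volume get all ...` output.
get_value() {
  awk '$1 == "cluster.op-version" {print $2}'
}

# Canned output, standing in for:
#   gluster volume get all cluster.op-version
sample="Option                                  Value
------                                  -----
cluster.op-version                      30712"

current=$(printf '%s\n' "$sample" | get_value)
echo "current op-version: $current"
```

On a live node the pipe source would be the real command; the parsing is the same either way.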
Thanks,
Gianluca

It seems op-version is not updated automatically by default, so that it can manage mixed versions while you update nodes one by one... I followed what is described here:

- Get current version:

[root@ovirt01 ~]# gluster volume get all cluster.op-version
Option                                  Value
------                                  -----
cluster.op-version                      30712
[root@ovirt01 ~]#

- Get maximum version I can set for the current setup:

[root@ovirt01 ~]# gluster volume get all cluster.max-op-version
Option                                  Value
------                                  -----
cluster.max-op-version                  31000
[root@ovirt01 ~]#

- Get op-version information for all the connected clients:

[root@ovirt01 ~]# gluster volume status all clients | grep ":49" | awk '{print $4}' | sort | uniq -c
     72 31000
[root@ovirt01 ~]#

--> ok

- Update op-version:

[root@ovirt01 ~]# gluster volume set all cluster.op-version 31000
volume set: success
[root@ovirt01 ~]#

- Verify:

[root@ovirt01 ~]# gluster volume get all cluster.op-version
Option                                  Value
------                                  -----
cluster.op-version                      31000
[root@ovirt01 ~]#

--> ok

[root@ovirt01 ~]# gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful
[root@ovirt01 ~]# gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export gl01.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on ovirt02.localdomain.local. Please check log file for details.
Commit failed on ovirt03.localdomain.local.
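The decision made in the steps above (bump only if the cluster is below the maximum it supports) can be sketched as a guard. The two values are hard-coded from this session; on a live cluster they would come from the `gluster volume get all` commands shown above, and the resulting command is printed rather than executed:

```shell
#!/bin/sh
# Values taken from the session above; on a real cluster parse them from
# `gluster volume get all cluster.op-version` / `cluster.max-op-version`.
current=30712
max=31000

# Print (not run) the op-version bump only when it is actually needed.
cmd=""
if [ "$current" -lt "$max" ]; then
  cmd="gluster volume set all cluster.op-version $max"
  echo "$cmd"
else
  echo "op-version already at maximum ($current), nothing to do"
fi
```

All clients must already support the target op-version (the `volume status all clients` check above), otherwise the set command is refused.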
Please check log file for details.
[root@ovirt01 ~]#

[root@ovirt01 bricks]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Brick2: ovirt02.localdomain.local:/gluster/brick3/export
Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

[root@ovirt01 bricks]# gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful
[root@ovirt01 bricks]# gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export gl02.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on localhost. Please check log file for details.
[root@ovirt01 bricks]#

I proceed (I actually have nothing on the export volume...):

[root@ovirt01 bricks]# gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful
[root@ovirt01 bricks]# gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export gl02.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on localhost.
Please check log file for details.
[root@ovirt01 bricks]#

Again an error:

[root@ovirt01 bricks]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root@ovirt01 bricks]#

The last one:

[root@ovirt01 bricks]# gluster volume reset-brick export ovirt03.localdomain.local:/gluster/brick3/export start
volume reset-brick: success: reset-brick start operation successful
[root@ovirt01 bricks]# gluster volume reset-brick export ovirt03.localdomain.local:/gluster/brick3/export gl03.localdomain.local:/gluster/brick3/export commit force
volume reset-brick: failed: Commit failed on localhost.
Please check log file for details.
[root@ovirt01 bricks]#

Again an error:

[root@ovirt01 bricks]# gluster volume info export

Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
[root@ovirt01 bricks]#

See here for the gluster log in gzip format.... The first command executed at 14:57 and the other two at 15:04.

This is what is seen by oVirt right now for the volume (after the first command I saw 2 of 3 up).

Gianluca
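For reference, the three per-node command pairs attempted above all follow one pattern. A small loop (a sketch, assuming the ovirtNN -> glNN naming shown in this thread) prints the pairs rather than running them, which is handy for reviewing before pasting:

```shell
#!/bin/sh
# Generate (print, not execute) the reset-brick start/commit pairs for
# renaming each brick host from ovirtNN to glNN, as attempted above.
volume=export
for n in 01 02 03; do
  old="ovirt${n}.localdomain.local:/gluster/brick3/${volume}"
  new="gl${n}.localdomain.local:/gluster/brick3/${volume}"
  echo "gluster volume reset-brick ${volume} ${old} start"
  echo "gluster volume reset-brick ${volume} ${old} ${new} commit force"
done
```

Note that each pair must fully succeed (start, then commit force) on one brick before moving to the next, otherwise the volume loses bricks as seen in the `gluster volume info` output above.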
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users