Dear oVirt Users,

I've followed the self-hosted-engine upgrade documentation and upgraded my 4.1 system to 4.2.3. I upgraded the first node with yum upgrade, and it now seems to be working fine. But since the upgrade, the Gluster information seems to be displayed incorrectly on the admin panel: the volume is yellow, and there are red bricks from that node.

I've checked in the console, and I think my Gluster is not degraded:

[root@n1 ~]# gluster volume list
volume1
volume2

[root@n1 ~]# gluster volume info

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster/brick/brick1
Brick2: 10.104.0.2:/gluster/brick/brick1
Brick3: 10.104.0.3:/gluster/brick/brick1
Brick4: 10.104.0.1:/gluster/brick/brick2
Brick5: 10.104.0.2:/gluster/brick/brick2
Brick6: 10.104.0.3:/gluster/brick/brick2
Brick7: 10.104.0.1:/gluster/brick/brick3
Brick8: 10.104.0.2:/gluster/brick/brick3
Brick9: 10.104.0.3:/gluster/brick/brick3
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on

Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster2/brick/brick1
Brick2: 10.104.0.2:/gluster2/brick/brick1
Brick3: 10.104.0.3:/gluster2/brick/brick1
Brick4: 10.104.0.1:/gluster2/brick/brick2
Brick5: 10.104.0.2:/gluster2/brick/brick2
Brick6: 10.104.0.3:/gluster2/brick/brick2
Brick7: 10.104.0.1:/gluster2/brick/brick3
Brick8: 10.104.0.2:/gluster2/brick/brick3
Brick9: 10.104.0.3:/gluster2/brick/brick3
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.quorum-type: auto
network.ping-timeout: 10
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on
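If it helps, I can also check for pending self-heal entries with the standard gluster CLI (I'd expect these to come back empty, since nothing looks degraded from the node itself):

    gluster volume heal volume1 info
    gluster volume heal volume2 info

The volume status also looks clean: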
[root@n1 ~]# gluster volume status
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster/brick/brick1      49152     0          Y       3464
Brick 10.104.0.2:/gluster/brick/brick1      49152     0          Y       68937
Brick 10.104.0.3:/gluster/brick/brick1      49161     0          Y       94506
Brick 10.104.0.1:/gluster/brick/brick2      49153     0          Y       3457
Brick 10.104.0.2:/gluster/brick/brick2      49153     0          Y       68943
Brick 10.104.0.3:/gluster/brick/brick2      49162     0          Y       94514
Brick 10.104.0.1:/gluster/brick/brick3      49154     0          Y       3465
Brick 10.104.0.2:/gluster/brick/brick3      49154     0          Y       68949
Brick 10.104.0.3:/gluster/brick/brick3      49163     0          Y       94520
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: volume2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster2/brick/brick1     49155     0          Y       3852
Brick 10.104.0.2:/gluster2/brick/brick1     49158     0          Y       68955
Brick 10.104.0.3:/gluster2/brick/brick1     49164     0          Y       94527
Brick 10.104.0.1:/gluster2/brick/brick2     49156     0          Y       3851
Brick 10.104.0.2:/gluster2/brick/brick2     49159     0          Y       68961
Brick 10.104.0.3:/gluster2/brick/brick2     49165     0          Y       94533
Brick 10.104.0.1:/gluster2/brick/brick3     49157     0          Y       3883
Brick 10.104.0.2:/gluster2/brick/brick3     49160     0          Y       68968
Brick 10.104.0.3:/gluster2/brick/brick3     49166     0          Y       94541
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume2
------------------------------------------------------------------------------
There are no active volume tasks

I think oVirt can't read valid information about Gluster. I can't continue the upgrade of the other hosts while this problem exists.

Please help me :)

Thanks.

Regards,
Tibor
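P.S. Since the engine gets its Gluster status through VDSM, my guess (just an assumption, not something from the docs) is stale monitoring data on the upgraded node rather than a real Gluster problem. If it is safe to do so, I could also verify peer connectivity and restart VDSM on that node; please correct me if restarting vdsmd would disturb running VMs:

    gluster peer status        # all peers should show "Peer in Cluster (Connected)"
    systemctl restart vdsmd    # hoping the engine then re-reads the node's Gluster state (assumption)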
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org