Hi,

Thank you for your fast reply :)

2018-05-10 11:01:51,574+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
2018-05-10 11:01:51,768+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
2018-05-10 11:01:51,788+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
2018-05-10 11:01:51,892+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,898+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,905+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,911+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,917+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,924+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,925+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261

This is happening continuously.

Thanks!
Tibor
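Since the warnings repeat on every poll, it can help to summarize how many "Could not associate brick" entries each volume accumulates. Below is a minimal shell sketch; the real log lives at /var/log/ovirt-engine/engine.log on the engine host (an assumption about your layout), but a small inline sample is used here so the pipeline is self-contained:

```shell
#!/bin/sh
# Count "Could not associate brick" warnings per volume UUID.
# An inline sample stands in for /var/log/ovirt-engine/engine.log.
LOG=/tmp/engine-sample.log
cat > "$LOG" <<'EOF'
2018-05-10 11:01:51,892+02 WARN  [...] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network
2018-05-10 11:01:51,898+02 WARN  [...] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network
2018-05-10 11:01:51,911+02 WARN  [...] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network
EOF
# Pull the volume UUID out of each warning line and count occurrences.
grep "Could not associate brick" "$LOG" \
  | sed "s/.*of volume '\([^']*\)'.*/\1/" \
  | sort | uniq -c
```

Against the sample this prints one count per volume UUID; run the same pipeline on the real engine.log to see whether both volumes are affected equally.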
----- On May 10, 2018, at 10:56, Sahina Bose <sabose@redhat.com> wrote:

Could you check the engine.log if there are errors related to getting GlusterVolumeAdvancedDetails?

On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:

Dear oVirt Users,

I've followed the self-hosted-engine upgrade documentation and upgraded my 4.1 system to 4.2.3. I upgraded the first node with yum upgrade, and it now seems to be working fine. But since the upgrade, the gluster information seems to be displayed incorrectly on the admin panel: the volume is yellow, and there are red bricks from that node.

I've checked in the console, and I think my gluster is not degraded:

[root@n1 ~]# gluster volume list
volume1
volume2

[root@n1 ~]# gluster volume info

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster/brick/brick1
Brick2: 10.104.0.2:/gluster/brick/brick1
Brick3: 10.104.0.3:/gluster/brick/brick1
Brick4: 10.104.0.1:/gluster/brick/brick2
Brick5: 10.104.0.2:/gluster/brick/brick2
Brick6: 10.104.0.3:/gluster/brick/brick2
Brick7: 10.104.0.1:/gluster/brick/brick3
Brick8: 10.104.0.2:/gluster/brick/brick3
Brick9: 10.104.0.3:/gluster/brick/brick3
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on

Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster2/brick/brick1
Brick2: 10.104.0.2:/gluster2/brick/brick1
Brick3: 10.104.0.3:/gluster2/brick/brick1
Brick4: 10.104.0.1:/gluster2/brick/brick2
Brick5: 10.104.0.2:/gluster2/brick/brick2
Brick6: 10.104.0.3:/gluster2/brick/brick2
Brick7: 10.104.0.1:/gluster2/brick/brick3
Brick8: 10.104.0.2:/gluster2/brick/brick3
Brick9: 10.104.0.3:/gluster2/brick/brick3
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.quorum-type: auto
network.ping-timeout: 10
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on

[root@n1 ~]# gluster volume status
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster/brick/brick1      49152     0          Y       3464
Brick 10.104.0.2:/gluster/brick/brick1      49152     0          Y       68937
Brick 10.104.0.3:/gluster/brick/brick1      49161     0          Y       94506
Brick 10.104.0.1:/gluster/brick/brick2      49153     0          Y       3457
Brick 10.104.0.2:/gluster/brick/brick2      49153     0          Y       68943
Brick 10.104.0.3:/gluster/brick/brick2      49162     0          Y       94514
Brick 10.104.0.1:/gluster/brick/brick3      49154     0          Y       3465
Brick 10.104.0.2:/gluster/brick/brick3      49154     0          Y       68949
Brick 10.104.0.3:/gluster/brick/brick3      49163     0          Y       94520
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: volume2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster2/brick/brick1     49155     0          Y       3852
Brick 10.104.0.2:/gluster2/brick/brick1     49158     0          Y       68955
Brick 10.104.0.3:/gluster2/brick/brick1     49164     0          Y       94527
Brick 10.104.0.1:/gluster2/brick/brick2     49156     0          Y       3851
Brick 10.104.0.2:/gluster2/brick/brick2     49159     0          Y       68961
Brick 10.104.0.3:/gluster2/brick/brick2     49165     0          Y       94533
Brick 10.104.0.1:/gluster2/brick/brick3     49157     0          Y       3883
Brick 10.104.0.2:/gluster2/brick/brick3     49160     0          Y       68968
Brick 10.104.0.3:/gluster2/brick/brick3     49166     0          Y       94541
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume2
------------------------------------------------------------------------------
There are no active volume tasks

I think oVirt can't read valid information about gluster. I can't continue upgrading the other hosts while this problem exists.

Please help me :)

Thanks

Regards,
Tibor
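To double-check that the status output above really shows every brick up (rather than eyeballing the Online column), it can be checked mechanically. A minimal sketch, assuming `gluster volume status` output in the usual column layout has been saved to a file; a short inline sample is used here so it runs anywhere:

```shell
#!/bin/sh
# Report any brick whose Online flag (column 5 of a "Brick ..." line)
# is not "Y" in saved `gluster volume status` output.
STATUS=/tmp/gluster-status.txt
cat > "$STATUS" <<'EOF'
Brick 10.104.0.1:/gluster/brick/brick1      49152     0          Y       3464
Brick 10.104.0.2:/gluster/brick/brick1      49152     0          Y       68937
Brick 10.104.0.3:/gluster/brick/brick1      49161     0          Y       94506
EOF
# Fields per Brick line: "Brick" path tcp-port rdma-port online pid.
offline=$(awk '/^Brick / && $5 != "Y" { print $2 }' "$STATUS")
if [ -z "$offline" ]; then
  echo "all bricks online"
else
  echo "offline bricks:"
  echo "$offline"
fi
```

If this says all bricks are online while the admin panel still shows red bricks, that points at the engine's view (the "no gluster network found" warnings) rather than at gluster itself.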
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org