
Dear oVirt Users,

I've followed the self-hosted-engine upgrade documentation and upgraded my 4.1 system to 4.2.3. I upgraded the first node with yum upgrade, and it seems to be working fine now. But since the upgrade, the gluster information seems to be displayed incorrectly on the admin panel: the volume is yellow, and there are red bricks from that node. I've checked on the console, and I think my gluster is not degraded:

[root@n1 ~]# gluster volume list
volume1
volume2

[root@n1 ~]# gluster volume info

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster/brick/brick1
Brick2: 10.104.0.2:/gluster/brick/brick1
Brick3: 10.104.0.3:/gluster/brick/brick1
Brick4: 10.104.0.1:/gluster/brick/brick2
Brick5: 10.104.0.2:/gluster/brick/brick2
Brick6: 10.104.0.3:/gluster/brick/brick2
Brick7: 10.104.0.1:/gluster/brick/brick3
Brick8: 10.104.0.2:/gluster/brick/brick3
Brick9: 10.104.0.3:/gluster/brick/brick3
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on

Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster2/brick/brick1
Brick2: 10.104.0.2:/gluster2/brick/brick1
Brick3: 10.104.0.3:/gluster2/brick/brick1
Brick4: 10.104.0.1:/gluster2/brick/brick2
Brick5: 10.104.0.2:/gluster2/brick/brick2
Brick6: 10.104.0.3:/gluster2/brick/brick2
Brick7: 10.104.0.1:/gluster2/brick/brick3
Brick8: 10.104.0.2:/gluster2/brick/brick3
Brick9: 10.104.0.3:/gluster2/brick/brick3
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.quorum-type: auto
network.ping-timeout: 10
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on

[root@n1 ~]# gluster volume status
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster/brick/brick1      49152     0          Y       3464
Brick 10.104.0.2:/gluster/brick/brick1      49152     0          Y       68937
Brick 10.104.0.3:/gluster/brick/brick1      49161     0          Y       94506
Brick 10.104.0.1:/gluster/brick/brick2      49153     0          Y       3457
Brick 10.104.0.2:/gluster/brick/brick2      49153     0          Y       68943
Brick 10.104.0.3:/gluster/brick/brick2      49162     0          Y       94514
Brick 10.104.0.1:/gluster/brick/brick3      49154     0          Y       3465
Brick 10.104.0.2:/gluster/brick/brick3      49154     0          Y       68949
Brick 10.104.0.3:/gluster/brick/brick3      49163     0          Y       94520
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: volume2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster2/brick/brick1     49155     0          Y       3852
Brick 10.104.0.2:/gluster2/brick/brick1     49158     0          Y       68955
Brick 10.104.0.3:/gluster2/brick/brick1     49164     0          Y       94527
Brick 10.104.0.1:/gluster2/brick/brick2     49156     0          Y       3851
Brick 10.104.0.2:/gluster2/brick/brick2     49159     0          Y       68961
Brick 10.104.0.3:/gluster2/brick/brick2     49165     0          Y       94533
Brick 10.104.0.1:/gluster2/brick/brick3     49157     0          Y       3883
Brick 10.104.0.2:/gluster2/brick/brick3     49160     0          Y       68968
Brick 10.104.0.3:/gluster2/brick/brick3     49166     0          Y       94541
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume2
------------------------------------------------------------------------------
There are no active volume tasks

I think oVirt can't read valid information about gluster. I can't continue upgrading the other hosts while this problem exists.

Please help me :)

Thanks

Regards,
Tibor
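A cross-check that "gluster volume status" does not cover is whether any self-heal entries are still pending. A generic sketch with the standard gluster CLI and the volume names from above (not output captured from this cluster):

  # list any entries still waiting to be healed, per volume and brick
  for vol in volume1 volume2; do
      echo "== $vol =="
      gluster volume heal "$vol" info
  done

If every brick reports "Number of entries: 0", the data path really is healthy and the yellow volume / red bricks are purely a reporting problem on the engine side.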

Could you check the engine.log to see if there are errors related to getting GlusterVolumeAdvancedDetails?
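For example, a generic sketch of that check (assuming the default log location /var/log/ovirt-engine/engine.log on the engine VM):

  # errors or warnings from the gluster monitoring code paths
  grep -E 'GlusterVolumeAdvancedDetails|GlusterSyncJob' /var/log/ovirt-engine/engine.log | grep -E 'ERROR|WARN'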

Hi,

Thank you for your fast reply :)

2018-05-10 11:01:51,574+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
2018-05-10 11:01:51,768+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
2018-05-10 11:01:51,788+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
2018-05-10 11:01:51,892+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,898+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,905+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,911+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,917+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,924+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,925+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261

This is happening continuously.

Thanks!
Tibor

On May 10, 2018 at 10:56, Sahina Bose <sabose@redhat.com> wrote:
Could you check the engine.log to see if there are errors related to getting GlusterVolumeAdvancedDetails?
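For what it's worth, the "no gluster network found in cluster" warnings above only say that no logical network in cluster '59c10db3-0324-0320-0120-000000000339' carries the gluster role, so the engine cannot map the bricks to a dedicated gluster network; as noted in the next reply, that by itself does not break status monitoring. A hedged sketch of how one could list the cluster's network roles over the REST API (engine host name, credentials and cluster id are placeholders, and the exact XML tags are given from memory, not verified against this setup):

  # networks with the gluster role should show <usage>gluster</usage>
  curl -sk -u admin@internal:PASSWORD \
      "https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/networks" \
      | grep -E '<name>|<usage>'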

This doesn't affect the monitoring of state. Are there any errors in vdsm.log? Or errors in engine.log of the form "Error while refreshing brick statuses for volume"?
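A generic sketch of both checks, assuming the default log paths (vdsm.log on the hosts, engine.log on the engine VM):

  # on one of the hosts, e.g. n1:
  grep -E 'ERROR|Traceback' /var/log/vdsm/vdsm.log | tail -n 50

  # on the engine VM:
  grep 'Error while refreshing brick statuses' /var/log/ovirt-engine/engine.log | tail -n 20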

Hi,

I found this:

2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null
2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
2018-05-10 03:24:19,113+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967
2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c
2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c
2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d
2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d
2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264
2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0
2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255
2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255
2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e
2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e
2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534
2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534
2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f
2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f
2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null

Regards,
Tibor

On May 10, 2018 at 11:43, Sahina Bose <sabose@redhat.com> wrote:
This doesn't affect the monitoring of state. Are there any errors in vdsm.log? Or errors in engine.log of the form "Error while refreshing brick statuses for volume"?
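Since the engine side only logs "execution failed: null", the actual failure has to be dug out of the host-side logs at the matching timestamps. A generic sketch, using a timestamp and host taken from the log above and the default vdsm log paths:

  # on n1.itsmart.cloud, around the 03:24:19 failure:
  grep -A 20 '2018-05-10 03:24:19' /var/log/vdsm/vdsm.log
  grep -A 20 '2018-05-10 03:24:19' /var/log/vdsm/supervdsm.log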

There's a bug here. Can you file one, attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud?
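A hedged sketch of gathering those files (default log paths; adjust file names as needed before attaching them to the bug report):

  # on n3.itsmart.cloud:
  tar czf n3-vdsm-logs.tar.gz /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log

  # on the engine VM:
  tar czf engine-log.tar.gz /var/log/ovirt-engine/engine.log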
GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, GlusterVolumeAdvancedDetailsVD SParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f 2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster. GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster. GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f 2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
R Tibor
----- On 10 May 2018, at 11:43, Sahina Bose <sabose@redhat.com> wrote:
This doesn't affect the monitoring of state. Any errors in vdsm.log? Or errors in engine.log of the form "Error while refreshing brick statuses for volume"?
On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Thank you for your fast reply :)
2018-05-10 11:01:51,574+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
2018-05-10 11:01:51,768+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
2018-05-10 11:01:51,788+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
2018-05-10 11:01:51,892+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,898+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,905+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,911+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,917+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,924+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,925+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261
This is happening continuously.
Thanks! Tibor
----- On 10 May 2018, at 10:56, Sahina Bose <sabose@redhat.com> wrote:
Could you check the engine.log to see if there are errors related to getting GlusterVolumeAdvancedDetails?

Hi,
I've attached the vdsm and supervdsm logs. But I don't have the engine.log here, because that is on the hosted engine VM. Should I send that?
Thank you.
Regards,
Tibor
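P.S. For completeness: the vdsm and supervdsm logs come from the usual /var/log/vdsm/ location on the node, and if the engine.log is needed as well, something like the lines below should copy it off the engine VM (the engine hostname here is only a placeholder):

# recent errors from vdsm and supervdsm on the node (default log paths)
grep ERROR /var/log/vdsm/vdsm.log | tail -n 50
grep ERROR /var/log/vdsm/supervdsm.log | tail -n 50
# copy engine.log from the hosted engine VM; replace engine.example.local with the real engine FQDN
scp root@engine.example.local:/var/log/ovirt-engine/engine.log .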
----- On 10 May 2018, at 12:30, Sahina Bose <sabose@redhat.com> wrote:
There's a bug here. Can you log one, attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud?

Hi,
Could someone help me, please? I can't finish my upgrade process.
Thanks,
R Tibor
Hi,
I've attached the vdsm and supervdsm logs. But I don't have engine.log here, because that is on hosted engine vm. Should I send that ?
Thank you
Regards,
Tibor ----- 2018. máj.. 10., 12:30, Sahina Bose <sabose@redhat.com> írta:
There's a bug here. Can you log one attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > wrote:
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae 2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null 2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null 2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d 2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null 2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f 2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null 2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58 2018-05-10 03:24:19,113+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967 2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null 2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967 2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c 2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c 2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null 2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d 2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d 2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264 2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick 
'10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.g luster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null 2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0 2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255 2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null 2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255 2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e 2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null 2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e 2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534 2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null 2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534 2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, 
GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f 2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f 2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
R Tibor
----- 2018. máj.. 10., 11:43, Sahina Bose < [ mailto:sabose@redhat.com | sabose@redhat.com ] > írta:
This doesn't affect the monitoring of state. Any errors in vdsm.log? Or errors in engine.log of the form "Error while refreshing brick statuses for volume"
On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > wrote:
Hi,
Thank you for your fast reply :)
2018-05-10 11:01:51,574+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
2018-05-10 11:01:51,768+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
2018-05-10 11:01:51,788+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
2018-05-10 11:01:51,892+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,898+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,905+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,911+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,917+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,924+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,925+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261
This is happening continuously.
Thanks! Tibor
----- 10 May 2018, 10:56, Sahina Bose <sabose@redhat.com> wrote:
Could you check the engine.log if there are errors related to getting GlusterVolumeAdvancedDetails ?

The two key errors I'd investigate are these...

2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
I'd start with that first one. Is the network/interface group of your storage layer actually defined as a Gluster & Migration network within oVirt?

On 12 May 2018 at 03:44, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could someone help me please? I can't finish my upgrade process.
Thanks R Tibor
----- 10 May 2018, 12:51, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I've attached the vdsm and supervdsm logs. But I don't have engine.log here, because that is on the hosted engine VM. Should I send that?
Thank you
Regards,
Tibor

----- 10 May 2018, 12:30, Sahina Bose <sabose@redhat.com> wrote:
There's a bug here. Can you log one, attaching this engine.log and also the vdsm.log & supervdsm.log from n3.itsmart.cloud?
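Assuming the stock oVirt/vdsm log locations, the files being asked for would typically be collected like this (paths are the defaults, adjust if they were relocated):

# On n3.itsmart.cloud
tar czf n3-vdsm-logs.tar.gz /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log

# On the hosted-engine VM
tar czf engine-log.tar.gz /var/log/ovirt-engine/engine.log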
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null
2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
2018-05-10 03:24:19,113+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967
2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c
2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c
2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d
2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d
2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264
2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
-- Doug
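One way to verify Doug's point about the network role is to list the networks attached to the cluster and check whether any of them carries the gluster usage. The sketch below assumes the standard oVirt 4.2 REST API paths, with placeholder credentials and engine FQDN; the cluster ID is the one that appears in the log messages above. The same information is visible in the Administration Portal in the cluster's Logical Networks / Manage Networks dialog.

# List the cluster's networks and their usages; look for <usage>gluster</usage>
curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://ENGINE_FQDN/ovirt-engine/api/clusters/59c10db3-0324-0320-0120-000000000339/networks' \
  | grep -E '<name>|<usage>'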

Hi,

Yes, I have a gluster network, but it's "funny" because that one is 10.105.0.x/24. :( Also, n4.itsmart.cloud means 10.104.0.4. The 10.104.0.x/24 is my ovirtmgmt network. However, 10.104.0.x is accessible from all hosts. What should I do?

Thanks,
R Tibor
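A rough way to cross-check the mismatch described above — hypothetical commands, run on any of the gluster hosts — is to compare the addresses the bricks are registered under with the addresses actually configured on the host, in particular the subnet that carries the gluster network role:

# Addresses the bricks were created with (10.104.0.x in this setup)
gluster volume info volume1 | grep '^Brick'

# Addresses configured on this host; the network with the gluster role is 10.105.0.x
ip -o -4 addr show | awk '{print $2, $4}'

If the bricks are registered on the ovirtmgmt subnet while the gluster role sits on a different subnet, that would be consistent with the "no gluster network found" warnings the engine is logging.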
>>> Please help me:)
>>> Thanks
>>> Regards,
>>> Tibor
-- Doug

Meanwhile I just changed my gluster network to 10.104.0.0/24, but nothing happened. Regards, Tibor ----- On 14 May 2018, at 9:49, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Yes, I have a gluster network, but it's "funny" because that one is 10.105.0.x/24. :( Also, n4.itsmart.cloud means 10.104.0.4. The 10.104.0.x/24 is my ovirtmgmt network.
However, 10.104.0.x is accessible from all hosts.
What should I do?
Thanks,
R
Tibor
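One way to see the mismatch from the console is to compare the addresses gluster itself uses for peers and bricks with the addresses configured on each host. A minimal sketch (the volume name is taken from the output above; interface naming will differ per host):

# addresses gluster uses for its peers and brick definitions
gluster peer status
gluster volume info volume1 | grep '^Brick'
# addresses actually configured on this host, to compare with the
# subnet that carries the gluster role in the oVirt cluster
ip -brief addr show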
----- On 12 May 2018, at 17:17, Doug Ingham <dougti@gmail.com> wrote:
The two key errors I'd investigate are these...
2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
I'd start with that first one. Is the network/interface group of your storage layer actually defined as a Gluster & Migration network within oVirt?
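For what it's worth, the same thing can be checked outside the UI through the engine's REST API. This is only a sketch from memory; the engine FQDN and credentials are placeholders, and the exact element names may differ by version:

# list clusters to find the cluster id
curl -k -u 'admin@internal:PASSWORD' 'https://engine.example.com/ovirt-engine/api/clusters'
# list the networks attached to that cluster
curl -k -u 'admin@internal:PASSWORD' 'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/networks'
# the storage network should carry the gluster role, i.e. a
# <usage>gluster</usage> entry under <usages> in its <network> element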
On 12 May 2018 at 03:44, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could someone help me, please? I can't finish my upgrade process.
Thanks R Tibor
----- On 10 May 2018, at 12:51, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I've attached the vdsm and supervdsm logs. But I don't have the engine.log here, because that is on the hosted engine VM. Should I send that?
Thank you
Regards,
Tibor ----- On 10 May 2018, at 12:30, Sahina Bose <sabose@redhat.com> wrote:
There's a bug here. Can you log one, attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud?
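A simple way to gather those for the bug report, assuming the default log locations (ovirt-log-collector on the engine would also work, if it is installed):

# on the hosted-engine VM
tar czf engine-logs.tar.gz /var/log/ovirt-engine/engine.log
# on n3.itsmart.cloud
tar czf n3-vdsm-logs.tar.gz /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log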
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae 2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null 2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null 2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d 2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null 2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f 2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null 2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58 2018-05-10 03:24:19,113+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967 2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null 2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967 2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c 2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c 2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null 2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d 2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d 2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264 2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick 
'10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.g luster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null 2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0 2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255 2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null 2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255 2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e 2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null 2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e 2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534 2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null 2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534 2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, 
GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f 2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f 2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
R Tibor
----- On 10 May 2018, at 11:43, Sahina Bose <sabose@redhat.com> wrote:
> This doesn't affect the monitoring of state. > Any errors in vdsm.log? > Or errors in engine.log of the form "Error while refreshing brick statuses for > volume"
> On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
>> Hi,
>> Thank you for your fast reply :)
>> 2018-05-10 11:01:51,574+02 INFO >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >> (DefaultQuartzScheduler6) [7f01fc2d] START, >> GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, >> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >> log id: 39adbbb8 >> 2018-05-10 11:01:51,768+02 INFO >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, >> return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , >> n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log >> id: 39adbbb8 >> 2018-05-10 11:01:51,788+02 INFO >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >> (DefaultQuartzScheduler6) [7f01fc2d] START, >> GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, >> GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >> log id: 738a7261 >> 2018-05-10 11:01:51,892+02 WARN >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >> '10.104.0.1:/gluster/brick/brick1' of volume >> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >> network found in cluster '59c10db3-0324-0320-0120-000000000339' >> 2018-05-10 11:01:51,898+02 WARN >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >> '10.104.0.1:/gluster/brick/brick2' of volume >> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >> network found in cluster '59c10db3-0324-0320-0120-000000000339' >> 2018-05-10 11:01:51,905+02 WARN >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >> '10.104.0.1:/gluster/brick/brick3' of volume >> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >> network found in cluster '59c10db3-0324-0320-0120-000000000339' >> 2018-05-10 11:01:51,911+02 WARN >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >> '10.104.0.1:/gluster2/brick/brick1' of volume >> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >> network found in cluster '59c10db3-0324-0320-0120-000000000339' >> 2018-05-10 11:01:51,917+02 WARN >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >> '10.104.0.1:/gluster2/brick/brick2' of volume >> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >> network found in cluster '59c10db3-0324-0320-0120-000000000339' >> 2018-05-10 11:01:51,924+02 WARN >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >> '10.104.0.1:/gluster2/brick/brick3' of volume >> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >> network found in cluster '59c10db3-0324-0320-0120-000000000339' >> 2018-05-10 11:01:51,925+02 INFO >> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, >> return: >> {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, >> 
e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, >> log id: 738a7261
>> This is happening continuously.
>> Thanks! >> Tibor
>> ----- On 10 May 2018, at 10:56, Sahina Bose <sabose@redhat.com> wrote:
>>> Could you check the engine.log if there are errors related to getting >>> GlusterVolumeAdvancedDetails ?
>>> On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
>>>> Dear Ovirt Users, >>>> I've followed up the self-hosted-engine upgrade documentation, I upgraded my 4.1 >>>> system to 4.2.3. >>>> I upgaded the first node with yum upgrade, it seems working now fine. But since >>>> upgrade, the gluster informations seems to displayed incorrect on the admin >>>> panel. The volume yellow, and there are red bricks from that node. >>>> I've checked in console, I think my gluster is not degraded:
>>>> root@n1 ~]# gluster volume list >>>> volume1 >>>> volume2 >>>> [root@n1 ~]# gluster volume info >>>> Volume Name: volume1 >>>> Type: Distributed-Replicate >>>> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27 >>>> Status: Started >>>> Snapshot Count: 0 >>>> Number of Bricks: 3 x 3 = 9 >>>> Transport-type: tcp >>>> Bricks: >>>> Brick1: 10.104.0.1:/gluster/brick/brick1 >>>> Brick2: 10.104.0.2:/gluster/brick/brick1 >>>> Brick3: 10.104.0.3:/gluster/brick/brick1 >>>> Brick4: 10.104.0.1:/gluster/brick/brick2 >>>> Brick5: 10.104.0.2:/gluster/brick/brick2 >>>> Brick6: 10.104.0.3:/gluster/brick/brick2 >>>> Brick7: 10.104.0.1:/gluster/brick/brick3 >>>> Brick8: 10.104.0.2:/gluster/brick/brick3 >>>> Brick9: 10.104.0.3:/gluster/brick/brick3 >>>> Options Reconfigured: >>>> transport.address-family: inet >>>> performance.readdir-ahead: on >>>> nfs.disable: on >>>> storage.owner-uid: 36 >>>> storage.owner-gid: 36 >>>> performance.quick-read: off >>>> performance.read-ahead: off >>>> performance.io-cache: off >>>> performance.stat-prefetch: off >>>> performance.low-prio-threads: 32 >>>> network.remote-dio: enable >>>> cluster.eager-lock: enable >>>> cluster.quorum-type: auto >>>> cluster.server-quorum-type: server >>>> cluster.data-self-heal-algorithm: full >>>> cluster.locking-scheme: granular >>>> cluster.shd-max-threads: 8 >>>> cluster.shd-wait-qlength: 10000 >>>> features.shard: on >>>> user.cifs: off >>>> server.allow-insecure: on >>>> Volume Name: volume2 >>>> Type: Distributed-Replicate >>>> Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8 >>>> Status: Started >>>> Snapshot Count: 0 >>>> Number of Bricks: 3 x 3 = 9 >>>> Transport-type: tcp >>>> Bricks: >>>> Brick1: 10.104.0.1:/gluster2/brick/brick1 >>>> Brick2: 10.104.0.2:/gluster2/brick/brick1 >>>> Brick3: 10.104.0.3:/gluster2/brick/brick1 >>>> Brick4: 10.104.0.1:/gluster2/brick/brick2 >>>> Brick5: 10.104.0.2:/gluster2/brick/brick2 >>>> Brick6: 10.104.0.3:/gluster2/brick/brick2 >>>> Brick7: 10.104.0.1:/gluster2/brick/brick3 >>>> Brick8: 10.104.0.2:/gluster2/brick/brick3 >>>> Brick9: 10.104.0.3:/gluster2/brick/brick3 >>>> Options Reconfigured: >>>> nfs.disable: on >>>> performance.readdir-ahead: on >>>> transport.address-family: inet >>>> cluster.quorum-type: auto >>>> network.ping-timeout: 10 >>>> auth.allow: * >>>> performance.quick-read: off >>>> performance.read-ahead: off >>>> performance.io-cache: off >>>> performance.stat-prefetch: off >>>> performance.low-prio-threads: 32 >>>> network.remote-dio: enable >>>> cluster.eager-lock: enable >>>> cluster.server-quorum-type: server >>>> cluster.data-self-heal-algorithm: full >>>> cluster.locking-scheme: granular >>>> cluster.shd-max-threads: 8 >>>> cluster.shd-wait-qlength: 10000 >>>> features.shard: on >>>> user.cifs: off >>>> storage.owner-uid: 36 >>>> storage.owner-gid: 36 >>>> server.allow-insecure: on >>>> [root@n1 ~]# gluster volume status >>>> Status of volume: volume1 >>>> Gluster process TCP Port RDMA Port Online Pid >>>> ------------------------------------------------------------------------------ >>>> Brick 10.104.0.1:/gluster/brick/brick1 49152 0 Y 3464 >>>> Brick 10.104.0.2:/gluster/brick/brick1 49152 0 Y 68937 >>>> Brick 10.104.0.3:/gluster/brick/brick1 49161 0 Y 94506 >>>> Brick 10.104.0.1:/gluster/brick/brick2 49153 0 Y 3457 >>>> Brick 10.104.0.2:/gluster/brick/brick2 49153 0 Y 68943 >>>> Brick 10.104.0.3:/gluster/brick/brick2 49162 0 Y 94514 >>>> Brick 10.104.0.1:/gluster/brick/brick3 49154 0 Y 3465 >>>> Brick 10.104.0.2:/gluster/brick/brick3 49154 0 Y 68949 >>>> Brick 
10.104.0.3:/gluster/brick/brick3 49163 0 Y 94520 >>>> Self-heal Daemon on localhost N/A N/A Y 54356 >>>> Self-heal Daemon on 10.104.0.2 N/A N/A Y 962 >>>> Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977 >>>> Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603 >>>> Task Status of Volume volume1 >>>> ------------------------------------------------------------------------------ >>>> There are no active volume tasks >>>> Status of volume: volume2 >>>> Gluster process TCP Port RDMA Port Online Pid >>>> ------------------------------------------------------------------------------ >>>> Brick 10.104.0.1:/gluster2/brick/brick1 49155 0 Y 3852 >>>> Brick 10.104.0.2:/gluster2/brick/brick1 49158 0 Y 68955 >>>> Brick 10.104.0.3:/gluster2/brick/brick1 49164 0 Y 94527 >>>> Brick 10.104.0.1:/gluster2/brick/brick2 49156 0 Y 3851 >>>> Brick 10.104.0.2:/gluster2/brick/brick2 49159 0 Y 68961 >>>> Brick 10.104.0.3:/gluster2/brick/brick2 49165 0 Y 94533 >>>> Brick 10.104.0.1:/gluster2/brick/brick3 49157 0 Y 3883 >>>> Brick 10.104.0.2:/gluster2/brick/brick3 49160 0 Y 68968 >>>> Brick 10.104.0.3:/gluster2/brick/brick3 49166 0 Y 94541 >>>> Self-heal Daemon on localhost N/A N/A Y 54356 >>>> Self-heal Daemon on 10.104.0.2 N/A N/A Y 962 >>>> Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977 >>>> Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603 >>>> Task Status of Volume volume2 >>>> ------------------------------------------------------------------------------ >>>> There are no active volume tasks >>>> I think ovirt can't read valid informations about gluster. >>>> I can't contiune upgrade of other hosts until this problem exist.
>>>> Please help me:)
>>>> Thanks
>>>> Regards,
>>>> Tibor
-- Doug

On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could someone help me, please? I can't finish my upgrade process.
https://gerrit.ovirt.org/91164 should fix the error you're facing. Can you elaborate why this is affecting the upgrade process?
Thanks R Tibor
----- On 10 May 2018, at 12:51, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I've attached the vdsm and supervdsm logs. But I don't have the engine.log here, because that is on the hosted engine VM. Should I send that?
Thank you
Regards,
Tibor ----- On 10 May 2018, at 12:30, Sahina Bose <sabose@redhat.com> wrote:
There's a bug here. Can you log one, attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud?
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster. GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae 2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null 2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command ' GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null 2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d 2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command ' GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null 2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f 2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command ' GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null 2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58 2018-05-10 03:24:19,113+02 INFO [org.ovirt.engine.core. 
vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967 2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command ' GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null 2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967 2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVD SParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c 2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster. GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c 2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null 2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d 2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d 2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameter s:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264 2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core. 
vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine. core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine. core.common.businessentities.g luster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command ' GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null 2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0 2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255 2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command ' GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null 2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255 2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e 2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command ' GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null 2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e 2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534 2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command ' GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null 2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534 2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core. 
vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, GlusterVolumeAdvancedDetailsVD SParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f 2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster. GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f 2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
R Tibor
----- On 10 May 2018, at 11:43, Sahina Bose <sabose@redhat.com> wrote:
This doesn't affect the monitoring of state. Any errors in vdsm.log? Or errors in engine.log of the form "Error while refreshing brick statuses for volume"
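For reference, a quick way to pull those out, assuming the default log locations:

# on the hosted-engine VM
grep 'Error while refreshing brick statuses' /var/log/ovirt-engine/engine.log | tail -n 20
# on each host
grep -iE 'error|warn' /var/log/vdsm/vdsm.log | tail -n 50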
On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Thank you for your fast reply :)
2018-05-10 11:01:51,574+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase: {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8 2018-05-10 11:01:51,768+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8 2018-05-10 11:01:51,788+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameter s:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261 2018-05-10 11:01:51,892+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 11:01:51,898+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 11:01:51,905+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 11:01:51,911+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 11:01:51,917+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 11:01:51,924+02 WARN [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120- 000000000339' 2018-05-10 11:01:51,925+02 INFO [org.ovirt.engine.core. vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine. core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine. core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261
This is happening continuously.
Thanks! Tibor
----- On 10 May 2018, at 10:56, Sahina Bose <sabose@redhat.com> wrote:
Could you check the engine.log if there are errors related to getting GlusterVolumeAdvancedDetails?
On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:

Hi,
Sorry for my question, but could you please tell me how to use this patch?
Thanks,
Regards,
Tibor
----- On 14 May 2018, at 10:47, Sahina Bose <sabose@redhat.com> wrote:
On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could someone help me, please? I can't finish my upgrade process.
https://gerrit.ovirt.org/91164 should fix the error you're facing.
Can you elaborate why this is affecting the upgrade process?
Thanks R Tibor
----- On 10 May 2018, at 12:51, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I've attached the vdsm and supervdsm logs. But I don't have engine.log here, because that is on the hosted-engine VM. Should I send that too?
Thank you
Regards,
Tibor
----- On 10 May 2018, at 12:30, Sahina Bose <sabose@redhat.com> wrote:
There's a bug here. Can you log one, attaching this engine.log and also the vdsm.log & supervdsm.log from n3.itsmart.cloud?
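(For anyone gathering the same data: a minimal sketch of collecting the requested logs, assuming the default log locations; host names and file names below are only examples, adjust them to your setup.)

# on the affected host (n3 here) - vdsm and supervdsm log under /var/log/vdsm by default
[root@n3 ~]# tar czf /tmp/n3-vdsm-logs.tar.gz /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log
# on the hosted-engine VM - the engine log is /var/log/ovirt-engine/engine.log by default
[root@engine ~]# tar czf /tmp/engine-log.tar.gz /var/log/ovirt-engine/engine.log
# if you only have host access, the engine log can be pulled over ssh, for example:
[root@n1 ~]# scp root@<engine-vm-fqdn>:/var/log/ovirt-engine/engine.log /tmp/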
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae 2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null 2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null 2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d 2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null 2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f 2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null 2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58 2018-05-10 03:24:19,113+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967 2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null 2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967 2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c 2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c 2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null 2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d 2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d 2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264 2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick 
'10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339' 2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.g luster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null 2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0 2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255 2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null 2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255 2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e 2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null 2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e 2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534 2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null 2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534 2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, 
GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f 2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}' 2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f 2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
R Tibor
----- On 10 May 2018, at 11:43, Sahina Bose <sabose@redhat.com> wrote:
This doesn't affect the monitoring of state. Any errors in vdsm.log? Or errors in engine.log of the form "Error while refreshing brick statuses for volume"
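(A quick way to check for these, assuming the default locations - engine.log lives on the hosted-engine VM and vdsm.log on each host; the paths are assumptions if your installation was customised:)

[root@engine ~]# grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail
[root@n1 ~]# grep -iE "error|traceback" /var/log/vdsm/vdsm.log | tail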
On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
> Hi,
> Thank you for your fast reply :)
> 2018-05-10 11:01:51,574+02 INFO > [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > (DefaultQuartzScheduler6) [7f01fc2d] START, > GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, > VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), > log id: 39adbbb8 > 2018-05-10 11:01:51,768+02 INFO > [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, > return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , > n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log > id: 39adbbb8 > 2018-05-10 11:01:51,788+02 INFO > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > (DefaultQuartzScheduler6) [7f01fc2d] START, > GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, > GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), > log id: 738a7261 > 2018-05-10 11:01:51,892+02 WARN > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick > '10.104.0.1:/gluster/brick/brick1' of volume > 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster > network found in cluster '59c10db3-0324-0320-0120-000000000339' > 2018-05-10 11:01:51,898+02 WARN > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick > '10.104.0.1:/gluster/brick/brick2' of volume > 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster > network found in cluster '59c10db3-0324-0320-0120-000000000339' > 2018-05-10 11:01:51,905+02 WARN > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick > '10.104.0.1:/gluster/brick/brick3' of volume > 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster > network found in cluster '59c10db3-0324-0320-0120-000000000339' > 2018-05-10 11:01:51,911+02 WARN > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick > '10.104.0.1:/gluster2/brick/brick1' of volume > '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster > network found in cluster '59c10db3-0324-0320-0120-000000000339' > 2018-05-10 11:01:51,917+02 WARN > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick > '10.104.0.1:/gluster2/brick/brick2' of volume > '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster > network found in cluster '59c10db3-0324-0320-0120-000000000339' > 2018-05-10 11:01:51,924+02 WARN > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick > '10.104.0.1:/gluster2/brick/brick3' of volume > '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster > network found in cluster '59c10db3-0324-0320-0120-000000000339' > 2018-05-10 11:01:51,925+02 INFO > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, > return: > {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, > 
e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, > log id: 738a7261
> This is happening continuously.
> Thanks! > Tibor
> ----- On 10 May 2018, at 10:56, Sahina Bose <sabose@redhat.com> wrote:
>> Could you check the engine.log for errors related to getting GlusterVolumeAdvancedDetails?

Hi,
Could you explain how I can use this patch?
R,
Tibor

On Tue, May 15, 2018 at 1:28 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could you explain how I can use this patch?
You can use the 4.2 nightly to test it out - http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm
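(A rough sketch of how one might try the nightly on the engine VM - these exact steps are an assumption, not an official procedure, and taking a backup first is strongly advised. Since the fix is in the engine's gluster sync code, it should arrive through the updated engine packages once it is merged into the nightly builds.)

[root@engine ~]# engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log
[root@engine ~]# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm
[root@engine ~]# yum update "ovirt-*-setup*"
[root@engine ~]# engine-setup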
R, Tibor
----- 2018. máj.. 14., 11:18, Demeter Tibor <tdemeter@itsmart.hu> írta:
Hi,
Sorry for my question, but can you tell me please how can I use this patch?
Thanks, Regards, Tibor ----- 2018. máj.. 14., 10:47, Sahina Bose <sabose@redhat.com> írta:
On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could someone help me please ? I can't finish my upgrade process.
https://gerrit.ovirt.org/91164 should fix the error you're facing.
Can you elaborate why this is affecting the upgrade process?
Thanks R Tibor
----- 2018. máj.. 10., 12:51, Demeter Tibor <tdemeter@itsmart.hu> írta:
Hi,
I've attached the vdsm and supervdsm logs. But I don't have engine.log here, because that is on hosted engine vm. Should I send that ?
Thank you
Regards,
Tibor ----- 2018. máj.. 10., 12:30, Sahina Bose <sabose@redhat.com> írta:
There's a bug here. Can you log one attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null
2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
2018-05-10 03:24:19,113+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967
2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c
2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c
2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d
2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d
2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264
2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0
2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255
2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255
2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e
2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e
2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534
2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534
2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f
2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f
2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
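For reference, here is what I would cross-check on the nodes themselves, since GetGlusterVolumeAdvancedDetailsVDSCommand finishes but GlusterSyncJob then fails with 'null'. This is only a minimal sketch, assuming the standard gluster CLI on the nodes and the volume names above; the --xml form simply gives the same per-brick detail in machine-readable output:

[root@n1 ~]# gluster volume status volume1 detail
[root@n1 ~]# gluster volume status volume2 detail
[root@n1 ~]# gluster volume status volume1 detail --xml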
Regards, Tibor
----- On May 10, 2018, at 11:43, Sahina Bose <sabose@redhat.com> wrote:
This doesn't affect the monitoring of state. Are there any errors in vdsm.log, or errors in engine.log of the form "Error while refreshing brick statuses for volume"?
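For example, something like this minimal sketch should pull those out (assuming the default log locations, /var/log/ovirt-engine/engine.log on the hosted-engine VM and /var/log/vdsm/vdsm.log on each host; the "engine" prompt below is just a placeholder for wherever the engine runs):

[root@engine ~]# grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail -n 20
[root@n1 ~]# grep -iE "error|exception" /var/log/vdsm/vdsm.log | grep -i gluster | tail -n 20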
On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Thank you for your fast reply :)
2018-05-10 11:01:51,574+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
2018-05-10 11:01:51,768+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
2018-05-10 11:01:51,788+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
2018-05-10 11:01:51,892+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,898+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,905+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,911+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,917+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,924+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,925+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261
This is happening continuously.
Thanks! Tibor
----- On May 10, 2018, at 10:56, Sahina Bose <sabose@redhat.com> wrote:
Could you check engine.log for errors related to getting GlusterVolumeAdvancedDetails?
On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Dear Ovirt Users, I've followed up the self-hosted-engine upgrade documentation, I upgraded my 4.1 system to 4.2.3. I upgaded the first node with yum upgrade, it seems working now fine. But since upgrade, the gluster informations seems to displayed incorrect on the admin panel. The volume yellow, and there are red bricks from that node. I've checked in console, I think my gluster is not degraded:
root@n1 ~]# gluster volume list volume1 volume2 [root@n1 ~]# gluster volume info
Volume Name: volume1 Type: Distributed-Replicate Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27 Status: Started Snapshot Count: 0 Number of Bricks: 3 x 3 = 9 Transport-type: tcp Bricks: Brick1: 10.104.0.1:/gluster/brick/brick1 Brick2: 10.104.0.2:/gluster/brick/brick1 Brick3: 10.104.0.3:/gluster/brick/brick1 Brick4: 10.104.0.1:/gluster/brick/brick2 Brick5: 10.104.0.2:/gluster/brick/brick2 Brick6: 10.104.0.3:/gluster/brick/brick2 Brick7: 10.104.0.1:/gluster/brick/brick3 Brick8: 10.104.0.2:/gluster/brick/brick3 Brick9: 10.104.0.3:/gluster/brick/brick3 Options Reconfigured: transport.address-family: inet performance.readdir-ahead: on nfs.disable: on storage.owner-uid: 36 storage.owner-gid: 36 performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off performance.low-prio-threads: 32 network.remote-dio: enable cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off server.allow-insecure: on
Volume Name: volume2 Type: Distributed-Replicate Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8 Status: Started Snapshot Count: 0 Number of Bricks: 3 x 3 = 9 Transport-type: tcp Bricks: Brick1: 10.104.0.1:/gluster2/brick/brick1 Brick2: 10.104.0.2:/gluster2/brick/brick1 Brick3: 10.104.0.3:/gluster2/brick/brick1 Brick4: 10.104.0.1:/gluster2/brick/brick2 Brick5: 10.104.0.2:/gluster2/brick/brick2 Brick6: 10.104.0.3:/gluster2/brick/brick2 Brick7: 10.104.0.1:/gluster2/brick/brick3 Brick8: 10.104.0.2:/gluster2/brick/brick3 Brick9: 10.104.0.3:/gluster2/brick/brick3 Options Reconfigured: nfs.disable: on performance.readdir-ahead: on transport.address-family: inet cluster.quorum-type: auto network.ping-timeout: 10 auth.allow: * performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off performance.low-prio-threads: 32 network.remote-dio: enable cluster.eager-lock: enable cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 server.allow-insecure: on [root@n1 ~]# gluster volume status Status of volume: volume1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------ ------------------ Brick 10.104.0.1:/gluster/brick/brick1 49152 0 Y 3464 Brick 10.104.0.2:/gluster/brick/brick1 49152 0 Y 68937 Brick 10.104.0.3:/gluster/brick/brick1 49161 0 Y 94506 Brick 10.104.0.1:/gluster/brick/brick2 49153 0 Y 3457 Brick 10.104.0.2:/gluster/brick/brick2 49153 0 Y 68943 Brick 10.104.0.3:/gluster/brick/brick2 49162 0 Y 94514 Brick 10.104.0.1:/gluster/brick/brick3 49154 0 Y 3465 Brick 10.104.0.2:/gluster/brick/brick3 49154 0 Y 68949 Brick 10.104.0.3:/gluster/brick/brick3 49163 0 Y 94520 Self-heal Daemon on localhost N/A N/A Y 54356 Self-heal Daemon on 10.104.0.2 N/A N/A Y 962 Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977 Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603
Task Status of Volume volume1 ------------------------------------------------------------ ------------------ There are no active volume tasks
Status of volume: volume2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------ ------------------ Brick 10.104.0.1:/gluster2/brick/brick1 49155 0 Y 3852 Brick 10.104.0.2:/gluster2/brick/brick1 49158 0 Y 68955 Brick 10.104.0.3:/gluster2/brick/brick1 49164 0 Y 94527 Brick 10.104.0.1:/gluster2/brick/brick2 49156 0 Y 3851 Brick 10.104.0.2:/gluster2/brick/brick2 49159 0 Y 68961 Brick 10.104.0.3:/gluster2/brick/brick2 49165 0 Y 94533 Brick 10.104.0.1:/gluster2/brick/brick3 49157 0 Y 3883 Brick 10.104.0.2:/gluster2/brick/brick3 49160 0 Y 68968 Brick 10.104.0.3:/gluster2/brick/brick3 49166 0 Y 94541 Self-heal Daemon on localhost N/A N/A Y 54356 Self-heal Daemon on 10.104.0.2 N/A N/A Y 962 Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977 Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603
Task Status of Volume volume2 ------------------------------------------------------------ ------------------ There are no active volume tasks
I think ovirt can't read valid informations about gluster. I can't contiune upgrade of other hosts until this problem exist.
Please help me:)
Thanks
Regards,
Tibor
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
participants (3)
- Demeter Tibor
- Doug Ingham
- Sahina Bose