Hi,

Sorry for my question, but could you tell me please how I can use this patch?

Thanks,
Regards,
Tibor
----- On May 14, 2018, at 10:47, Sahina Bose <sabose@redhat.com> wrote:

On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:

Hi,
Could someone help me, please? I can't finish my upgrade process.

https://gerrit.ovirt.org/91164 should fix the error you're facing.
Can you elaborate on why this is affecting the upgrade process?
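For reference, a Gerrit change such as the one linked above can usually be fetched and cherry-picked into a source checkout. A minimal sketch follows; the patchset number is an assumption (the change page lists the latest one), and in practice most deployments would simply wait for the fix to land in the next 4.2.x build rather than build the engine from source:

```shell
# Sketch: construct the Gerrit ref for change 91164 (patchset 1 assumed).
# Gerrit exposes each patchset at refs/changes/<last two digits>/<change>/<patchset>.
CHANGE=91164
PATCHSET=1
LAST2=$(printf '%s' "$CHANGE" | tail -c 2)
REF="refs/changes/${LAST2}/${CHANGE}/${PATCHSET}"
echo "$REF"
# In a clone of the ovirt-engine source tree (network access required):
#   git fetch https://gerrit.ovirt.org/ovirt-engine "$REF" && git cherry-pick FETCH_HEAD
```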
Thanks
R
Tibor
----- On May 10, 2018, at 12:51, Demeter Tibor <tdemeter@itsmart.hu> wrote:

Hi,
I've attached the vdsm and supervdsm logs. But I don't have engine.log here, because that is on the hosted engine VM. Should I send that?

Thank you
Regards,
Tibor
----- On May 10, 2018, at 12:30, Sahina Bose <sabose@redhat.com> wrote:

There's a bug here. Can you log one, attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud?

On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:

Hi,
I found this:

2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null
2018-05-10 03:24:19,097+02 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 03:24:19,104+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 03:24:19,106+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
2018-05-10 03:24:19,107+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 03:24:19,109+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
2018-05-10 03:24:19,110+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 03:24:19,112+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
2018-05-10 03:24:19,113+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 03:24:19,116+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967
2018-05-10 03:24:19,117+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c
2018-05-10 03:24:20,748+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c
2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
2018-05-10 03:24:20,750+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d
2018-05-10 03:24:20,930+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d
2018-05-10 03:24:20,949+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264
2018-05-10 03:24:21,048+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,055+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,061+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,067+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,074+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,080+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,081+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 11:59:26,047+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0
2018-05-10 11:59:26,048+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255
2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 11:59:26,051+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255
2018-05-10 11:59:26,052+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e
2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 11:59:26,054+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e
2018-05-10 11:59:26,055+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534
2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 11:59:26,057+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534
2018-05-10 11:59:26,058+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f
2018-05-10 11:59:28,050+02 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,060+02 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,062+02 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,062+02 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,064+02 INFO  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,465+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f
2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null

R
Tibor
----- On May 10, 2018, at 11:43, Sahina Bose <sabose@redhat.com> wrote:

Or errors in engine.log of the form "Error while refreshing brick statuses for volume"? This doesn't affect the monitoring of state.
Any errors in vdsm.log?

On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:

Hi,
Thank you for your fast reply :)

2018-05-10 11:01:51,574+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
2018-05-10 11:01:51,768+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
2018-05-10 11:01:51,788+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
2018-05-10 11:01:51,892+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,898+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,905+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,911+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,917+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,924+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,925+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261

This is happening continuously.

Thanks!
Tibor
----- On May 10, 2018, at 10:56, Sahina Bose <sabose@redhat.com> wrote:

Could you check the engine.log for errors related to getting GlusterVolumeAdvancedDetails?

On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:

Dear oVirt Users,

I've followed the self-hosted-engine upgrade documentation and upgraded my 4.1 system to 4.2.3. I upgraded the first node with yum upgrade, and it seems to be working fine now. But since the upgrade, the gluster information seems to be displayed incorrectly on the admin panel: the volume is yellow, and there are red bricks from that node.

I've checked in the console; I think my gluster is not degraded:

[root@n1 ~]# gluster volume list
volume1
volume2

[root@n1 ~]# gluster volume info

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster/brick/brick1
Brick2: 10.104.0.2:/gluster/brick/brick1
Brick3: 10.104.0.3:/gluster/brick/brick1
Brick4: 10.104.0.1:/gluster/brick/brick2
Brick5: 10.104.0.2:/gluster/brick/brick2
Brick6: 10.104.0.3:/gluster/brick/brick2
Brick7: 10.104.0.1:/gluster/brick/brick3
Brick8: 10.104.0.2:/gluster/brick/brick3
Brick9: 10.104.0.3:/gluster/brick/brick3
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on

Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster2/brick/brick1
Brick2: 10.104.0.2:/gluster2/brick/brick1
Brick3: 10.104.0.3:/gluster2/brick/brick1
Brick4: 10.104.0.1:/gluster2/brick/brick2
Brick5: 10.104.0.2:/gluster2/brick/brick2
Brick6: 10.104.0.3:/gluster2/brick/brick2
Brick7: 10.104.0.1:/gluster2/brick/brick3
Brick8: 10.104.0.2:/gluster2/brick/brick3
Brick9: 10.104.0.3:/gluster2/brick/brick3
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.quorum-type: auto
network.ping-timeout: 10
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on

[root@n1 ~]# gluster volume status
Status of volume: volume1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster/brick/brick1      49152     0          Y       3464
Brick 10.104.0.2:/gluster/brick/brick1      49152     0          Y       68937
Brick 10.104.0.3:/gluster/brick/brick1      49161     0          Y       94506
Brick 10.104.0.1:/gluster/brick/brick2      49153     0          Y       3457
Brick 10.104.0.2:/gluster/brick/brick2      49153     0          Y       68943
Brick 10.104.0.3:/gluster/brick/brick2      49162     0          Y       94514
Brick 10.104.0.1:/gluster/brick/brick3      49154     0          Y       3465
Brick 10.104.0.2:/gluster/brick/brick3      49154     0          Y       68949
Brick 10.104.0.3:/gluster/brick/brick3      49163     0          Y       94520
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: volume2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster2/brick/brick1     49155     0          Y       3852
Brick 10.104.0.2:/gluster2/brick/brick1     49158     0          Y       68955
Brick 10.104.0.3:/gluster2/brick/brick1     49164     0          Y       94527
Brick 10.104.0.1:/gluster2/brick/brick2     49156     0          Y       3851
Brick 10.104.0.2:/gluster2/brick/brick2     49159     0          Y       68961
Brick 10.104.0.3:/gluster2/brick/brick2     49165     0          Y       94533
Brick 10.104.0.1:/gluster2/brick/brick3     49157     0          Y       3883
Brick 10.104.0.2:/gluster2/brick/brick3     49160     0          Y       68968
Brick 10.104.0.3:/gluster2/brick/brick3     49166     0          Y       94541
Self-heal Daemon on localhost               N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2              N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3              N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4              N/A       N/A        Y       61603

Task Status of Volume volume2
------------------------------------------------------------------------------
There are no active volume tasks

I think oVirt can't read valid information about gluster. I can't continue upgrading the other hosts while this problem exists.

Please help me :)

Thanks
Regards,
Tibor
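To pull the relevant failures out of the (very chatty) engine.log, a grep along these lines can help. A minimal sketch: the two-line sample file here stands in for /var/log/ovirt-engine/engine.log on the hosted-engine VM, and the pattern matches the two error signatures seen in this thread:

```shell
# Build a small sample in place of the real engine.log, then count matches.
cat > /tmp/engine.log.sample <<'EOF'
2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null
2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand
EOF
grep -cE "Error while refreshing brick statuses|execution failed: null" /tmp/engine.log.sample
```

Run against the real log (same grep, real path), a steadily growing count confirms the sync job is failing on every cycle rather than transiently.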
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org