Re: Gluster quorum

[+users] Can you provide the engine.log so we can see why the monitoring is not working here? Thanks! On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Meanwhile, I upgraded the engine, but the gluster state is the same on my first node. I've attached some screenshots of my problem.
Thanks
Tibor
----- On May 16, 2018, at 10:16, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Once 4.3.4 is released, do I just have to remove the nightly repo and update to stable?
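(For reference, a minimal sketch of what that switch back to stable could look like on the engine VM; the release-package name and URL below are assumptions that depend on which snapshot package was installed, so treat this as illustration rather than the official procedure:)

# on the engine VM, assuming the nightly repo came from a package named ovirt-release42-snapshot
yum remove ovirt-release42-snapshot
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
# then run the usual engine update sequence (yum update "ovirt-*-setup*", engine-setup, yum update),
# as sketched further down in the thread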
I'm sorry for my terrible English; I'll try to explain my problem with the update. I upgraded from 4.1.8.
I followed the official hosted-engine upgrade documentation, but it was not clear to me, because it references a lot of old things (I think): https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
Maybe the documentation needs updating, because I had a lot of questions during the upgrade and I was not sure about all of the necessary steps. For example, if I need to install the new 4.2 repo on the hosts, do I then need to remove the old repo from them? Why do I need to run "yum update -y" on the hosts when there is an "Update host" menu in the GUI? So maybe it is outdated. Since upgrading the hosted engine and the first node, I have problems with gluster. It seems to work fine if you check it from the console ("gluster volume status", etc.) but not in the GUI, because the volume is now yellow and the bricks of the first node are red.
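(For reference, the per-host steps the documentation seems to intend look roughly like the sketch below; the release-package names and URL are assumptions, and the GUI "Update host" flow performs roughly the same package update under the hood:)

# on each host, one at a time, after moving it to maintenance in the GUI
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm   # add the 4.2 repos
yum remove ovirt-release41                                                # assumed name of the old 4.1 release package
yum update -y
# reboot if a new kernel was installed, then activate the host again in the GUI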
Previously I made a mistake with glusterfs; my gluster config was wrong. I have corrected it, but that did not help; the gluster bricks are still red on my first node....
Now I will try upgrading to the nightly build, but I'm afraid, because it is a live, production system and I don't have a downtime window. I hope it will help.
Thanks for all,
Regards, Tibor Demeter
----- On May 16, 2018, at 9:58, Sahina Bose <sabose@redhat.com> wrote:
On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Is it a different, unstable repo? I have a production cluster; how safe is that? I don't have any experience with nightly builds. How can I use this? Does it have to be installed on the engine VM or on all of my hosts? Thanks in advance for helping me.
Only on the engine VM.
Regarding stability - it passes CI, so it is relatively stable; beyond that there are no guarantees.
What's the specific problem you're facing with the update? Can you elaborate?
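(A hedged sketch of trying the nightly snapshot on the engine VM only, as suggested above; the snapshot release-package URL is an assumption, so check the oVirt site for the actual package before running anything:)

# on the engine VM only
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm   # assumed snapshot release package
yum update "ovirt-*-setup*"
engine-setup      # engine-setup is what actually applies the new engine version
yum update        # update the remaining engine packages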
Regards,
Tibor
----- On May 15, 2018, at 9:58, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could you explain how I can use this patch?
R, Tibor
----- On May 14, 2018, at 11:18, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Sorry for my question, but can you please tell me how I can use this patch?
Thanks, Regards, Tibor ----- On May 14, 2018, at 10:47, Sahina Bose <sabose@redhat.com> wrote:
On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could someone help me, please? I can't finish my upgrade process.
https://gerrit.ovirt.org/91164 should fix the error you're facing.
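(As background on "using the patch": a Gerrit change like https://gerrit.ovirt.org/91164 is normally pulled into a source checkout with the standard Gerrit download commands, roughly as below - the patchset number is an assumption. On a deployed production engine, the practical way to get the fix is to wait for a build that already contains it, such as the nightly snapshot discussed earlier in the thread.)

# developer workflow against an ovirt-engine source checkout, not something to run on a production engine
git clone https://gerrit.ovirt.org/ovirt-engine
cd ovirt-engine
git fetch https://gerrit.ovirt.org/ovirt-engine refs/changes/64/91164/1   # patchset 1 assumed
git cherry-pick FETCH_HEAD
# the engine would then need to be rebuilt and redeployed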
Can you elaborate why this is affecting the upgrade process?
Thanks R Tibor
----- On May 10, 2018, at 12:51, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I've attached the vdsm and supervdsm logs. But I don't have engine.log here, because that is on the hosted-engine VM. Should I send that?
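(If needed, the engine log can simply be copied off the hosted-engine VM; the path below is the default log location and the hostname is illustrative:)

scp root@engine.example.com:/var/log/ovirt-engine/engine.log .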
Thank you
Regards,
Tibor ----- On May 10, 2018, at 12:30, Sahina Bose <sabose@redhat.com> wrote:
There's a bug here. Can you log one, attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud?
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null
2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
2018-05-10 03:24:19,113+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967
2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c
2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c
2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d
2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d
2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264
2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 118aa264

2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0
2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255
2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255
2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e
2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e
2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534
2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534
2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f
2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f
2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
R Tibor
----- On May 10, 2018, at 11:43, Sahina Bose <sabose@redhat.com> wrote:
This doesn't affect the monitoring of state. Any errors in vdsm.log? Or errors in engine.log of the form "Error while refreshing brick statuses for volume"?
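(A quick, hedged way to look for those errors; the paths are the default engine and VDSM log locations:)

# on the engine VM
grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail
# on the affected host
grep -iE "error|traceback" /var/log/vdsm/vdsm.log | tail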
On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Thank you for your fast reply :)
2018-05-10 11:01:51,574+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
2018-05-10 11:01:51,768+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
2018-05-10 11:01:51,788+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
2018-05-10 11:01:51,892+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,898+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,905+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,911+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,917+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,924+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,925+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261
This is happening continuously.
Thanks! Tibor
----- On May 10, 2018, at 10:56, Sahina Bose <sabose@redhat.com> wrote:
Could you check the engine.log to see if there are errors related to getting GlusterVolumeAdvancedDetails?
On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Dear oVirt Users, I followed the self-hosted-engine upgrade documentation and upgraded my 4.1 system to 4.2.3. I upgraded the first node with yum upgrade, and it seems to be working fine now. But since the upgrade, the gluster information seems to be displayed incorrectly in the admin panel: the volume is yellow, and there are red bricks from that node. I've checked from the console, and I think my gluster is not degraded:
[root@n1 ~]# gluster volume list
volume1
volume2

[root@n1 ~]# gluster volume info

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster/brick/brick1
Brick2: 10.104.0.2:/gluster/brick/brick1
Brick3: 10.104.0.3:/gluster/brick/brick1
Brick4: 10.104.0.1:/gluster/brick/brick2
Brick5: 10.104.0.2:/gluster/brick/brick2
Brick6: 10.104.0.3:/gluster/brick/brick2
Brick7: 10.104.0.1:/gluster/brick/brick3
Brick8: 10.104.0.2:/gluster/brick/brick3
Brick9: 10.104.0.3:/gluster/brick/brick3
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
storage.owner-uid: 36
storage.owner-gid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
server.allow-insecure: on

Volume Name: volume2
Type: Distributed-Replicate
Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.104.0.1:/gluster2/brick/brick1
Brick2: 10.104.0.2:/gluster2/brick/brick1
Brick3: 10.104.0.3:/gluster2/brick/brick1
Brick4: 10.104.0.1:/gluster2/brick/brick2
Brick5: 10.104.0.2:/gluster2/brick/brick2
Brick6: 10.104.0.3:/gluster2/brick/brick2
Brick7: 10.104.0.1:/gluster2/brick/brick3
Brick8: 10.104.0.2:/gluster2/brick/brick3
Brick9: 10.104.0.3:/gluster2/brick/brick3
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.quorum-type: auto
network.ping-timeout: 10
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on

[root@n1 ~]# gluster volume status
Status of volume: volume1
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster/brick/brick1       49152     0          Y       3464
Brick 10.104.0.2:/gluster/brick/brick1       49152     0          Y       68937
Brick 10.104.0.3:/gluster/brick/brick1       49161     0          Y       94506
Brick 10.104.0.1:/gluster/brick/brick2       49153     0          Y       3457
Brick 10.104.0.2:/gluster/brick/brick2       49153     0          Y       68943
Brick 10.104.0.3:/gluster/brick/brick2       49162     0          Y       94514
Brick 10.104.0.1:/gluster/brick/brick3       49154     0          Y       3465
Brick 10.104.0.2:/gluster/brick/brick3       49154     0          Y       68949
Brick 10.104.0.3:/gluster/brick/brick3       49163     0          Y       94520
Self-heal Daemon on localhost                N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2               N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3               N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4               N/A       N/A        Y       61603

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: volume2
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.104.0.1:/gluster2/brick/brick1      49155     0          Y       3852
Brick 10.104.0.2:/gluster2/brick/brick1      49158     0          Y       68955
Brick 10.104.0.3:/gluster2/brick/brick1      49164     0          Y       94527
Brick 10.104.0.1:/gluster2/brick/brick2      49156     0          Y       3851
Brick 10.104.0.2:/gluster2/brick/brick2      49159     0          Y       68961
Brick 10.104.0.3:/gluster2/brick/brick2      49165     0          Y       94533
Brick 10.104.0.1:/gluster2/brick/brick3      49157     0          Y       3883
Brick 10.104.0.2:/gluster2/brick/brick3      49160     0          Y       68968
Brick 10.104.0.3:/gluster2/brick/brick3      49166     0          Y       94541
Self-heal Daemon on localhost                N/A       N/A        Y       54356
Self-heal Daemon on 10.104.0.2               N/A       N/A        Y       962
Self-heal Daemon on 10.104.0.3               N/A       N/A        Y       108977
Self-heal Daemon on 10.104.0.4               N/A       N/A        Y       61603

Task Status of Volume volume2
------------------------------------------------------------------------------
There are no active volume tasks
I think oVirt can't read valid information about gluster. I can't continue upgrading the other hosts while this problem exists.
Please help me:)
Thanks
Regards,
Tibor

Hi, sure, thank you for your time! R Tibor
[+users]
Can you provide the engine.log to see why the monitoring is not working here. thanks!
On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > wrote:
Hi,
Meanwhile, I did the upgrade engine, but the gluster state is same on my first node. I've attached some screenshot of my problem.
Thanks
Tibor
----- 2018. máj.. 16., 10:16, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > írta Hi,
If 4.3.4 will release, i just have to remove the nightly repo and update to stable?
I'm sorry for my terrible English, I try to explain what was my problem with update. I'm upgraded from 4.1.8.
I followed up the official hosted-engine update documentation, that was not clear me, because it has referenced to a lot of old thing (i think). [ https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ | https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ ] [ https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-eng... | https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-eng... ]
Maybe it need to update, because I had a lot of question under upgrade and I was not sure in all of necessary steps. For example, If I need to installing the new, 4.2 repo on the hosts, then need to remove the old repo from that? Why I need to do a" yum update -y" on hosts, meanwhile there is an "Updatehost" menu in the GUI? So, maybe it outdated. Since upgrade hosted engine, and the first node, I have problems with gluster. It seems to working fine if you check it from console "gluster volume status, etc" but not on the Gui, because now it yellow, and the brick reds in the first node.
Previously I did a mistake with glusterfs, my gluster config was wrong. I have corrected them, but it did not helped to me,gluster bricks are reds on my first node yet....
Now I try to upgrade to nightly, but I'm affraid, because it a living, productive system, and I don't have downtime. I hope it will help me.
Thanks for all,
Regards, Tibor Demeter
----- 2018. máj.. 16., 9:58, Sahina Bose < [ mailto:sabose@redhat.com | sabose@redhat.com ] > írta:
On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > wrote:
Hi,
is it a different, unstable repo? I have a productive cluster, how is safe that? I don't have any experience with nightly build. How can I use this? It have to install to the engine VM or all of my hosts? Thanks in advance for help me..
Only on the engine VM.
Regarding stability - it passes CI so relatively stable, beyond that there are no guarantees.
What's the specific problem you're facing with update? Can you elaborate?
Regards,
Tibor
----- 2018. máj.. 15., 9:58, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > írta:
Hi,
Could you explain how can I use this patch?
R, Tibor
----- 2018. máj.. 14., 11:18, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > írta:
> Hi,
> Sorry for my question, but can you tell me please how can I use this patch?
> Thanks, > Regards, > Tibor > ----- 2018. máj.. 14., 10:47, Sahina Bose < [ mailto:sabose@redhat.com | > sabose@redhat.com ] > írta:
>> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >> tdemeter@itsmart.hu ] > wrote:
>>> Hi,
>>> Could someone help me please ? I can't finish my upgrade process.
>> [ https://gerrit.ovirt.org/91164 | https://gerrit.ovirt.org/91164 ] should fix >> the error you're facing.
>> Can you elaborate why this is affecting the upgrade process?
>>> Thanks >>> R >>> Tibor
>>> ----- 2018. máj.. 10., 12:51, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>> tdemeter@itsmart.hu ] > írta:
>>>> Hi,
>>>> I've attached the vdsm and supervdsm logs. But I don't have engine.log here, >>>> because that is on hosted engine vm. Should I send that ?
>>>> Thank you
>>>> Regards,
>>>> Tibor >>>> ----- 2018. máj.. 10., 12:30, Sahina Bose < [ mailto:sabose@redhat.com | >>>> sabose@redhat.com ] > írta:
>>>>> There's a bug here. Can you log one attaching this engine.log and also vdsm.log >>>>> & supervdsm.log from n3.itsmart.cloud
>>>>> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>>> tdemeter@itsmart.hu ] > wrote:
>>>>>> Hi,
>>>>>> I found this:
>>>>>> 2018-05-10 03:24:19,096+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand, return: >>>>>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, >>>>>> log id: 347435ae >>>>>> 2018-05-10 03:24:19,097+02 ERROR >>>>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) >>>>>> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of >>>>>> cluster 'C6220': null >>>>>> 2018-05-10 03:24:19,097+02 INFO >>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) >>>>>> [7715ceda] Failed to acquire lock and wait lock >>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>> sharedLocks=''}' >>>>>> 2018-05-10 03:24:19,104+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), >>>>>> log id: 6908121d >>>>>> 2018-05-10 03:24:19,106+02 ERROR >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] Command >>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' >>>>>> execution failed: null >>>>>> 2018-05-10 03:24:19,106+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d >>>>>> 2018-05-10 03:24:19,107+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), >>>>>> log id: 735c6a5f >>>>>> 2018-05-10 03:24:19,109+02 ERROR >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] Command >>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' >>>>>> execution failed: null >>>>>> 2018-05-10 03:24:19,109+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f >>>>>> 2018-05-10 03:24:19,110+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>> log id: 6f9e9f58 >>>>>> 2018-05-10 03:24:19,112+02 ERROR >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] Command >>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, >>>>>> 
VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' >>>>>> execution failed: null >>>>>> 2018-05-10 03:24:19,112+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58 >>>>>> 2018-05-10 03:24:19,113+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), >>>>>> log id: 2ee46967 >>>>>> 2018-05-10 03:24:19,115+02 ERROR >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] Command >>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' >>>>>> execution failed: null >>>>>> 2018-05-10 03:24:19,116+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967 >>>>>> 2018-05-10 03:24:19,117+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, >>>>>> GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', >>>>>> volumeName='volume1'}), log id: 7550e5c >>>>>> 2018-05-10 03:24:20,748+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand, return: >>>>>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, >>>>>> log id: 7550e5c >>>>>> 2018-05-10 03:24:20,749+02 ERROR >>>>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) >>>>>> [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of >>>>>> cluster 'C6220': null >>>>>> 2018-05-10 03:24:20,750+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>> (DefaultQuartzScheduler8) [7715ceda] START, >>>>>> GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>> log id: 120cc68d >>>>>> 2018-05-10 03:24:20,930+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>> (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, >>>>>> return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , >>>>>> n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log >>>>>> id: 120cc68d >>>>>> 2018-05-10 03:24:20,949+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>> (DefaultQuartzScheduler8) [7715ceda] START, >>>>>> GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, >>>>>> GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>> log id: 118aa264 >>>>>> 2018-05-10 03:24:21,048+02 WARN >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
>>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>> '10.104.0.1:/gluster/brick/brick1' of volume >>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>> 2018-05-10 03:24:21,055+02 WARN >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>> '10.104.0.1:/gluster/brick/brick2' of volume >>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>> 2018-05-10 03:24:21,061+02 WARN >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>> '10.104.0.1:/gluster/brick/brick3' of volume >>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>> 2018-05-10 03:24:21,067+02 WARN >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>> '10.104.0.1:/gluster2/brick/brick1' of volume >>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>> 2018-05-10 03:24:21,074+02 WARN >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>> '10.104.0.1:/gluster2/brick/brick2' of volume >>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>> 2018-05-10 03:24:21,080+02 WARN >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>> '10.104.0.1:/gluster2/brick/brick3' of volume >>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>> 2018-05-10 03:24:21,081+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>> (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, >>>>>> return: >>>>>> {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, >>>>>> e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.g >>>>>> luster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
>>>>>> 2018-05-10 11:59:26,047+02 ERROR >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] Command >>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' >>>>>> execution failed: null >>>>>> 2018-05-10 11:59:26,047+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0 >>>>>> 2018-05-10 11:59:26,048+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] START, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), >>>>>> log id: 28d9e255 >>>>>> 2018-05-10 11:59:26,051+02 ERROR >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] Command >>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' >>>>>> execution failed: null >>>>>> 2018-05-10 11:59:26,051+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255 >>>>>> 2018-05-10 11:59:26,052+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] START, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>> log id: 4a7b280e >>>>>> 2018-05-10 11:59:26,054+02 ERROR >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] Command >>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' >>>>>> execution failed: null >>>>>> 2018-05-10 11:59:26,054+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e >>>>>> 2018-05-10 11:59:26,055+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] START, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), >>>>>> log id: 18adc534 >>>>>> 2018-05-10 11:59:26,057+02 ERROR >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] Command >>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, >>>>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' >>>>>> execution failed: null >>>>>> 2018-05-10 11:59:26,057+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>> 
(DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534 >>>>>> 2018-05-10 11:59:26,058+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] START, >>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, >>>>>> GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', >>>>>> volumeName='volume1'}), log id: 3451084f >>>>>> 2018-05-10 11:59:28,050+02 INFO >>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>> sharedLocks=''}' >>>>>> 2018-05-10 11:59:28,060+02 INFO >>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>> sharedLocks=''}' >>>>>> 2018-05-10 11:59:28,062+02 INFO >>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>> sharedLocks=''}' >>>>>> 2018-05-10 11:59:31,054+02 INFO >>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>> sharedLocks=''}' >>>>>> 2018-05-10 11:59:31,054+02 INFO >>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>> sharedLocks=''}' >>>>>> 2018-05-10 11:59:31,062+02 INFO >>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>> sharedLocks=''}' >>>>>> 2018-05-10 11:59:31,064+02 INFO >>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>> sharedLocks=''}' >>>>>> 2018-05-10 11:59:31,465+02 INFO >>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand, return: >>>>>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, >>>>>> log id: 3451084f >>>>>> 2018-05-10 11:59:31,466+02 ERROR >>>>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) >>>>>> [400fa486] Error while refreshing brick statuses for volume 'volume1' of >>>>>> cluster 'C6220': null
>>>>>> R >>>>>> Tibor
>>>>>> ----- 2018. máj.. 10., 11:43, Sahina Bose < [ mailto:sabose@redhat.com | >>>>>> sabose@redhat.com ] > írta:
>>>>>>> This doesn't affect the monitoring of state. >>>>>>> Any errors in vdsm.log? >>>>>>> Or errors in engine.log of the form "Error while refreshing brick statuses for >>>>>>> volume"
>>>>>>> On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>>>>> tdemeter@itsmart.hu ] > wrote:
>>>>>>>> Hi,
>>>>>>>> Thank you for your fast reply :)
>>>>>>>> 2018-05-10 11:01:51,574+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] START, >>>>>>>> GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>> log id: 39adbbb8 >>>>>>>> 2018-05-10 11:01:51,768+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, >>>>>>>> return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , >>>>>>>> n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log >>>>>>>> id: 39adbbb8 >>>>>>>> 2018-05-10 11:01:51,788+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] START, >>>>>>>> GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>> GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>> log id: 738a7261 >>>>>>>> 2018-05-10 11:01:51,892+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>> '10.104.0.1:/gluster/brick/brick1' of volume >>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 11:01:51,898+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>> '10.104.0.1:/gluster/brick/brick2' of volume >>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 11:01:51,905+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>> '10.104.0.1:/gluster/brick/brick3' of volume >>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 11:01:51,911+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>> '10.104.0.1:/gluster2/brick/brick1' of volume >>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 11:01:51,917+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>> '10.104.0.1:/gluster2/brick/brick2' of volume >>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 11:01:51,924+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>> '10.104.0.1:/gluster2/brick/brick3' of volume >>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 11:01:51,925+02 INFO 
>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, >>>>>>>> return: >>>>>>>> {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, >>>>>>>> e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, >>>>>>>> log id: 738a7261
>>>>>>>> This happening continuously.
>>>>>>>> Thanks! >>>>>>>> Tibor
>>>>>>>> ----- 2018. máj.. 10., 10:56, Sahina Bose < [ mailto:sabose@redhat.com | >>>>>>>> sabose@redhat.com ] > írta:
>>>>>>>>> Could you check the engine.log if there are errors related to getting >>>>>>>>> GlusterVolumeAdvancedDetails ?
>>>>>>>>> On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>>>>>>> tdemeter@itsmart.hu ] > wrote:
>>>>>>>>>> Dear Ovirt Users, >>>>>>>>>> I've followed up the self-hosted-engine upgrade documentation, I upgraded my 4.1 >>>>>>>>>> system to 4.2.3. >>>>>>>>>> I upgaded the first node with yum upgrade, it seems working now fine. But since >>>>>>>>>> upgrade, the gluster informations seems to displayed incorrect on the admin >>>>>>>>>> panel. The volume yellow, and there are red bricks from that node. >>>>>>>>>> I've checked in console, I think my gluster is not degraded:
>>>>>>>>>> root@n1 ~]# gluster volume list >>>>>>>>>> volume1 >>>>>>>>>> volume2 >>>>>>>>>> [root@n1 ~]# gluster volume info >>>>>>>>>> Volume Name: volume1 >>>>>>>>>> Type: Distributed-Replicate >>>>>>>>>> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27 >>>>>>>>>> Status: Started >>>>>>>>>> Snapshot Count: 0 >>>>>>>>>> Number of Bricks: 3 x 3 = 9 >>>>>>>>>> Transport-type: tcp >>>>>>>>>> Bricks: >>>>>>>>>> Brick1: 10.104.0.1:/gluster/brick/brick1 >>>>>>>>>> Brick2: 10.104.0.2:/gluster/brick/brick1 >>>>>>>>>> Brick3: 10.104.0.3:/gluster/brick/brick1 >>>>>>>>>> Brick4: 10.104.0.1:/gluster/brick/brick2 >>>>>>>>>> Brick5: 10.104.0.2:/gluster/brick/brick2 >>>>>>>>>> Brick6: 10.104.0.3:/gluster/brick/brick2 >>>>>>>>>> Brick7: 10.104.0.1:/gluster/brick/brick3 >>>>>>>>>> Brick8: 10.104.0.2:/gluster/brick/brick3 >>>>>>>>>> Brick9: 10.104.0.3:/gluster/brick/brick3 >>>>>>>>>> Options Reconfigured: >>>>>>>>>> transport.address-family: inet >>>>>>>>>> performance.readdir-ahead: on >>>>>>>>>> nfs.disable: on >>>>>>>>>> storage.owner-uid: 36 >>>>>>>>>> storage.owner-gid: 36 >>>>>>>>>> performance.quick-read: off >>>>>>>>>> performance.read-ahead: off >>>>>>>>>> performance.io-cache: off >>>>>>>>>> performance.stat-prefetch: off >>>>>>>>>> performance.low-prio-threads: 32 >>>>>>>>>> network.remote-dio: enable >>>>>>>>>> cluster.eager-lock: enable >>>>>>>>>> cluster.quorum-type: auto >>>>>>>>>> cluster.server-quorum-type: server >>>>>>>>>> cluster.data-self-heal-algorithm: full >>>>>>>>>> cluster.locking-scheme: granular >>>>>>>>>> cluster.shd-max-threads: 8 >>>>>>>>>> cluster.shd-wait-qlength: 10000 >>>>>>>>>> features.shard: on >>>>>>>>>> user.cifs: off >>>>>>>>>> server.allow-insecure: on >>>>>>>>>> Volume Name: volume2 >>>>>>>>>> Type: Distributed-Replicate >>>>>>>>>> Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8 >>>>>>>>>> Status: Started >>>>>>>>>> Snapshot Count: 0 >>>>>>>>>> Number of Bricks: 3 x 3 = 9 >>>>>>>>>> Transport-type: tcp >>>>>>>>>> Bricks: >>>>>>>>>> Brick1: 10.104.0.1:/gluster2/brick/brick1 >>>>>>>>>> Brick2: 10.104.0.2:/gluster2/brick/brick1 >>>>>>>>>> Brick3: 10.104.0.3:/gluster2/brick/brick1 >>>>>>>>>> Brick4: 10.104.0.1:/gluster2/brick/brick2 >>>>>>>>>> Brick5: 10.104.0.2:/gluster2/brick/brick2 >>>>>>>>>> Brick6: 10.104.0.3:/gluster2/brick/brick2 >>>>>>>>>> Brick7: 10.104.0.1:/gluster2/brick/brick3 >>>>>>>>>> Brick8: 10.104.0.2:/gluster2/brick/brick3 >>>>>>>>>> Brick9: 10.104.0.3:/gluster2/brick/brick3 >>>>>>>>>> Options Reconfigured: >>>>>>>>>> nfs.disable: on >>>>>>>>>> performance.readdir-ahead: on >>>>>>>>>> transport.address-family: inet >>>>>>>>>> cluster.quorum-type: auto >>>>>>>>>> network.ping-timeout: 10 >>>>>>>>>> auth.allow: * >>>>>>>>>> performance.quick-read: off >>>>>>>>>> performance.read-ahead: off >>>>>>>>>> performance.io-cache: off >>>>>>>>>> performance.stat-prefetch: off >>>>>>>>>> performance.low-prio-threads: 32 >>>>>>>>>> network.remote-dio: enable >>>>>>>>>> cluster.eager-lock: enable >>>>>>>>>> cluster.server-quorum-type: server >>>>>>>>>> cluster.data-self-heal-algorithm: full >>>>>>>>>> cluster.locking-scheme: granular >>>>>>>>>> cluster.shd-max-threads: 8 >>>>>>>>>> cluster.shd-wait-qlength: 10000 >>>>>>>>>> features.shard: on >>>>>>>>>> user.cifs: off >>>>>>>>>> storage.owner-uid: 36 >>>>>>>>>> storage.owner-gid: 36 >>>>>>>>>> server.allow-insecure: on >>>>>>>>>> [root@n1 ~]# gluster volume status >>>>>>>>>> Status of volume: volume1 >>>>>>>>>> Gluster process TCP Port RDMA Port Online Pid >>>>>>>>>> 
>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick1     49152     0          Y       3464
>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick1     49152     0          Y       68937
>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick1     49161     0          Y       94506
>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick2     49153     0          Y       3457
>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick2     49153     0          Y       68943
>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick2     49162     0          Y       94514
>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick3     49154     0          Y       3465
>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick3     49154     0          Y       68949
>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick3     49163     0          Y       94520
>>>>>>>>>> Self-heal Daemon on localhost              N/A       N/A        Y       54356
>>>>>>>>>> Self-heal Daemon on 10.104.0.2             N/A       N/A        Y       962
>>>>>>>>>> Self-heal Daemon on 10.104.0.3             N/A       N/A        Y       108977
>>>>>>>>>> Self-heal Daemon on 10.104.0.4             N/A       N/A        Y       61603
>>>>>>>>>>
>>>>>>>>>> Task Status of Volume volume1
>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>> There are no active volume tasks
>>>>>>>>>>
>>>>>>>>>> Status of volume: volume2
>>>>>>>>>> Gluster process                            TCP Port  RDMA Port  Online  Pid
>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick1    49155     0          Y       3852
>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick1    49158     0          Y       68955
>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick1    49164     0          Y       94527
>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick2    49156     0          Y       3851
>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick2    49159     0          Y       68961
>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick2    49165     0          Y       94533
>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick3    49157     0          Y       3883
>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick3    49160     0          Y       68968
>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick3    49166     0          Y       94541
>>>>>>>>>> Self-heal Daemon on localhost              N/A       N/A        Y       54356
>>>>>>>>>> Self-heal Daemon on 10.104.0.2             N/A       N/A        Y       962
>>>>>>>>>> Self-heal Daemon on 10.104.0.3             N/A       N/A        Y       108977
>>>>>>>>>> Self-heal Daemon on 10.104.0.4             N/A       N/A        Y       61603
>>>>>>>>>>
>>>>>>>>>> Task Status of Volume volume2
>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>> There are no active volume tasks
>>>>>>>>>>
>>>>>>>>>> I think oVirt can't read valid information about gluster.
>>>>>>>>>> I can't continue the upgrade of the other hosts while this problem exists.
>>>>>>>>>> Please help me:)
>>>>>>>>>> Thanks
>>>>>>>>>> Regards,
>>>>>>>>>> Tibor
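A quick way to cross-check from the console that no brick is actually out of sync, beyond the volume status shown above (a minimal sketch only; the volume names are the ones listed above, and the commands are assumed to be run on one of the gluster peers):

# every brick should report "Number of entries: 0" if nothing is pending heal
gluster volume heal volume1 info
gluster volume heal volume2 info
# every node should show "Peer in Cluster (Connected)"
gluster peer status

If those all come back clean, the yellow volume and red bricks in the GUI point to a monitoring problem on the engine side rather than a real quorum or replication issue.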

It doesn't look like the patch was applied. Still see the same error in engine.log: "Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null"
Did you use engine-setup to upgrade? What's the version of ovirt-engine currently installed?
On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
sure,
Thank you for your time!
R Tibor
----- 2018. máj.. 17., 12:19, Sahina Bose <sabose@redhat.com> írta:
[+users]
Can you provide the engine.log to see why the monitoring is not working here. thanks!
On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Meanwhile, I did the upgrade engine, but the gluster state is same on my first node. I've attached some screenshot of my problem.
Thanks
Tibor
----- 2018. máj.. 16., 10:16, Demeter Tibor <tdemeter@itsmart.hu> írtaHi,
If 4.3.4 will release, i just have to remove the nightly repo and update to stable?
I'm sorry for my terrible English, I try to explain what was my problem with update. I'm upgraded from 4.1.8.
I followed up the official hosted-engine update documentation, that was not clear me, because it has referenced to a lot of old thing (i think). https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ https://www.ovirt.org/documentation/how-to/hosted- engine/#upgrade-hosted-engine
Maybe it need to update, because I had a lot of question under upgrade and I was not sure in all of necessary steps. For example, If I need to installing the new, 4.2 repo on the hosts, then need to remove the old repo from that? Why I need to do a" yum update -y" on hosts, meanwhile there is an "Updatehost" menu in the GUI? So, maybe it outdated. Since upgrade hosted engine, and the first node, I have problems with gluster. It seems to working fine if you check it from console "gluster volume status, etc" but not on the Gui, because now it yellow, and the brick reds in the first node.
Previously I did a mistake with glusterfs, my gluster config was wrong. I have corrected them, but it did not helped to me,gluster bricks are reds on my first node yet....
Now I try to upgrade to nightly, but I'm affraid, because it a living, productive system, and I don't have downtime. I hope it will help me.
Thanks for all,
Regards, Tibor Demeter
----- 2018. máj.. 16., 9:58, Sahina Bose <sabose@redhat.com> írta:
On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
is it a different, unstable repo? I have a productive cluster, how is safe that? I don't have any experience with nightly build. How can I use this? It have to install to the engine VM or all of my hosts? Thanks in advance for help me..
Only on the engine VM.
Regarding stability - it passes CI so relatively stable, beyond that there are no guarantees.
What's the specific problem you're facing with update? Can you elaborate?
Regards,
Tibor
----- 2018. máj.. 15., 9:58, Demeter Tibor <tdemeter@itsmart.hu> írta:
Hi,
Could you explain how can I use this patch?
R, Tibor
----- 2018. máj.. 14., 11:18, Demeter Tibor <tdemeter@itsmart.hu> írta:
Hi,
Sorry for my question, but can you tell me please how can I use this patch?
Thanks, Regards, Tibor ----- 2018. máj.. 14., 10:47, Sahina Bose <sabose@redhat.com> írta:
On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could someone help me please ? I can't finish my upgrade process.
https://gerrit.ovirt.org/91164 should fix the error you're facing.
Can you elaborate why this is affecting the upgrade process?
Thanks R Tibor
----- 2018. máj.. 10., 12:51, Demeter Tibor <tdemeter@itsmart.hu> írta:
Hi,
I've attached the vdsm and supervdsm logs. But I don't have engine.log here, because that is on hosted engine vm. Should I send that ?
Thank you
Regards,
Tibor ----- 2018. máj.. 10., 12:30, Sahina Bose <sabose@redhat.com> írta:
There's a bug here. Can you log one attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null
2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
2018-05-10 03:24:19,113+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967
2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c
2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c
2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d
2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d
2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264
2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 118aa264

2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0
2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255
2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255
2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e
2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e
2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534
2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534
2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f
2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f
2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
R Tibor
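The refresh failures quoted above can be isolated from the logs while the problem is being reproduced; a rough sketch, assuming the default oVirt log locations (engine.log on the hosted-engine VM, vdsm.log on each host):

# on the hosted-engine VM: the GlusterSyncJob brick-status refresh errors
grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail -n 20
# on each host: anything VDSM logged around the failing gluster verbs
grep -i gluster /var/log/vdsm/vdsm.log | grep -iE "error|traceback" | tail -n 20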
----- 2018. máj.. 10., 11:43, Sahina Bose <sabose@redhat.com> írta:
This doesn't affect the monitoring of state. Any errors in vdsm.log? Or errors in engine.log of the form "Error while refreshing brick statuses for volume"
On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Thank you for your fast reply :)
2018-05-10 11:01:51,574+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
2018-05-10 11:01:51,768+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
2018-05-10 11:01:51,788+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
2018-05-10 11:01:51,892+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,898+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,905+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,911+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,917+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,924+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
2018-05-10 11:01:51,925+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261
This is happening continuously.
Thanks! Tibor
----- 2018. máj.. 10., 10:56, Sahina Bose <sabose@redhat.com> írta:
Could you check the engine.log if there are errors related to getting GlusterVolumeAdvancedDetails ?

Hi,
The installed version is 4.2.4-0.0.master.20180515183442.git00e1340.el7.centos.
First I did a yum update "ovirt-*-setup*", then I ran engine-setup to do the upgrade. I didn't remove the old repos, I just installed the nightly repo alongside them.
Thank you again,
Regards,
Tibor
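For reference, the engine-side steps described above amount to roughly the following; a sketch only, assuming the nightly repo is already enabled on the engine VM and a current engine backup exists:

# on the engine VM
rpm -q ovirt-engine            # check which build is actually installed
yum update "ovirt-*-setup*"    # refresh the setup packages from the enabled repos
engine-setup                   # run the upgrade itself
rpm -q ovirt-engine            # should now report the build that contains the fix
# then watch whether the brick-status error keeps appearing
grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail -n 5

It is worth double-checking that the installed nightly build really contains https://gerrit.ovirt.org/91164 before drawing conclusions from the GUI state.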
It doesn't look like the patch was applied. Still see the same error in engine.log "Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null"\
Did you use engine-setup to upgrade? What's the version of ovirt-engine currently installed?
On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > wrote:
Hi,
sure,
Thank you for your time!
R Tibor
----- 2018. máj.. 17., 12:19, Sahina Bose < [ mailto:sabose@redhat.com | sabose@redhat.com ] > írta:
[+users]
Can you provide the engine.log to see why the monitoring is not working here. thanks!
On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > wrote:
Hi,
Meanwhile, I did the upgrade engine, but the gluster state is same on my first node. I've attached some screenshot of my problem.
Thanks
Tibor
----- 2018. máj.. 16., 10:16, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > írta Hi,
If 4.3.4 will release, i just have to remove the nightly repo and update to stable?
I'm sorry for my terrible English, I try to explain what was my problem with update. I'm upgraded from 4.1.8.
I followed up the official hosted-engine update documentation, that was not clear me, because it has referenced to a lot of old thing (i think). [ https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ | https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ ] [ https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-eng... | https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-eng... ]
Maybe it need to update, because I had a lot of question under upgrade and I was not sure in all of necessary steps. For example, If I need to installing the new, 4.2 repo on the hosts, then need to remove the old repo from that? Why I need to do a" yum update -y" on hosts, meanwhile there is an "Updatehost" menu in the GUI? So, maybe it outdated. Since upgrade hosted engine, and the first node, I have problems with gluster. It seems to working fine if you check it from console "gluster volume status, etc" but not on the Gui, because now it yellow, and the brick reds in the first node.
Previously I did a mistake with glusterfs, my gluster config was wrong. I have corrected them, but it did not helped to me,gluster bricks are reds on my first node yet....
Now I try to upgrade to nightly, but I'm affraid, because it a living, productive system, and I don't have downtime. I hope it will help me.
Thanks for all,
Regards, Tibor Demeter
----- 2018. máj.. 16., 9:58, Sahina Bose < [ mailto:sabose@redhat.com | sabose@redhat.com ] > írta:
On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | tdemeter@itsmart.hu ] > wrote:
> Hi,
> is it a different, unstable repo? I have a productive cluster, how is safe that? > I don't have any experience with nightly build. How can I use this? It have to > install to the engine VM or all of my hosts? > Thanks in advance for help me..
Only on the engine VM.
Regarding stability - it passes CI so relatively stable, beyond that there are no guarantees.
What's the specific problem you're facing with update? Can you elaborate?
> Regards,
> Tibor
> ----- 2018. máj.. 15., 9:58, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | > tdemeter@itsmart.hu ] > írta:
>> Hi,
>> Could you explain how can I use this patch?
>> R, >> Tibor
>> ----- 2018. máj.. 14., 11:18, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >> tdemeter@itsmart.hu ] > írta:
>>> Hi,
>>> Sorry for my question, but can you tell me please how can I use this patch?
>>> Thanks, >>> Regards, >>> Tibor >>> ----- 2018. máj.. 14., 10:47, Sahina Bose < [ mailto:sabose@redhat.com | >>> sabose@redhat.com ] > írta:
>>>> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>> tdemeter@itsmart.hu ] > wrote:
>>>>> Hi,
>>>>> Could someone help me please ? I can't finish my upgrade process.
>>>> [ https://gerrit.ovirt.org/91164 | https://gerrit.ovirt.org/91164 ] should fix >>>> the error you're facing.
>>>> Can you elaborate why this is affecting the upgrade process?
>>>>> Thanks >>>>> R >>>>> Tibor
>>>>> ----- 2018. máj.. 10., 12:51, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>>> tdemeter@itsmart.hu ] > írta:
>>>>>> Hi,
>>>>>> I've attached the vdsm and supervdsm logs. But I don't have engine.log here, >>>>>> because that is on hosted engine vm. Should I send that ?
>>>>>> Thank you
>>>>>> Regards,
>>>>>> Tibor >>>>>> ----- 2018. máj.. 10., 12:30, Sahina Bose < [ mailto:sabose@redhat.com | >>>>>> sabose@redhat.com ] > írta:
>>>>>>> There's a bug here. Can you log one attaching this engine.log and also vdsm.log >>>>>>> & supervdsm.log from n3.itsmart.cloud
>>>>>>> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>>>>> tdemeter@itsmart.hu ] > wrote:
>>>>>>>> Hi,
>>>>>>>> I found this:
>>>>>>>> 2018-05-10 03:24:19,096+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand, return: >>>>>>>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, >>>>>>>> log id: 347435ae >>>>>>>> 2018-05-10 03:24:19,097+02 ERROR >>>>>>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) >>>>>>>> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of >>>>>>>> cluster 'C6220': null >>>>>>>> 2018-05-10 03:24:19,097+02 INFO >>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) >>>>>>>> [7715ceda] Failed to acquire lock and wait lock >>>>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>>>> sharedLocks=''}' >>>>>>>> 2018-05-10 03:24:19,104+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), >>>>>>>> log id: 6908121d >>>>>>>> 2018-05-10 03:24:19,106+02 ERROR >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] Command >>>>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' >>>>>>>> execution failed: null >>>>>>>> 2018-05-10 03:24:19,106+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d >>>>>>>> 2018-05-10 03:24:19,107+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), >>>>>>>> log id: 735c6a5f >>>>>>>> 2018-05-10 03:24:19,109+02 ERROR >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] Command >>>>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' >>>>>>>> execution failed: null >>>>>>>> 2018-05-10 03:24:19,109+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f >>>>>>>> 2018-05-10 03:24:19,110+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>> log id: 6f9e9f58 >>>>>>>> 2018-05-10 03:24:19,112+02 ERROR >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] 
Command >>>>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' >>>>>>>> execution failed: null >>>>>>>> 2018-05-10 03:24:19,112+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58 >>>>>>>> 2018-05-10 03:24:19,113+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), >>>>>>>> log id: 2ee46967 >>>>>>>> 2018-05-10 03:24:19,115+02 ERROR >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] Command >>>>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' >>>>>>>> execution failed: null >>>>>>>> 2018-05-10 03:24:19,116+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967 >>>>>>>> 2018-05-10 03:24:19,117+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] START, >>>>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, >>>>>>>> GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', >>>>>>>> volumeName='volume1'}), log id: 7550e5c >>>>>>>> 2018-05-10 03:24:20,748+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, >>>>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand, return: >>>>>>>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, >>>>>>>> log id: 7550e5c >>>>>>>> 2018-05-10 03:24:20,749+02 ERROR >>>>>>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) >>>>>>>> [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of >>>>>>>> cluster 'C6220': null >>>>>>>> 2018-05-10 03:24:20,750+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] START, >>>>>>>> GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>> log id: 120cc68d >>>>>>>> 2018-05-10 03:24:20,930+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, >>>>>>>> return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , >>>>>>>> n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log >>>>>>>> id: 120cc68d >>>>>>>> 2018-05-10 03:24:20,949+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] START, >>>>>>>> GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>> 
GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>> log id: 118aa264 >>>>>>>> 2018-05-10 03:24:21,048+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>>>> '10.104.0.1:/gluster/brick/brick1' of volume >>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 03:24:21,055+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>>>> '10.104.0.1:/gluster/brick/brick2' of volume >>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 03:24:21,061+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>>>> '10.104.0.1:/gluster/brick/brick3' of volume >>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 03:24:21,067+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>>>> '10.104.0.1:/gluster2/brick/brick1' of volume >>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 03:24:21,074+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>>>> '10.104.0.1:/gluster2/brick/brick2' of volume >>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 03:24:21,080+02 WARN >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick >>>>>>>> '10.104.0.1:/gluster2/brick/brick3' of volume >>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>> 2018-05-10 03:24:21,081+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>>>> (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, >>>>>>>> return: >>>>>>>> {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, >>>>>>>> e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.g >>>>>>>> luster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
>>>>>>>> 2018-05-10 11:59:26,047+02 ERROR >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] Command >>>>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' >>>>>>>> execution failed: null >>>>>>>> 2018-05-10 11:59:26,047+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0 >>>>>>>> 2018-05-10 11:59:26,048+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] START, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), >>>>>>>> log id: 28d9e255 >>>>>>>> 2018-05-10 11:59:26,051+02 ERROR >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] Command >>>>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' >>>>>>>> execution failed: null >>>>>>>> 2018-05-10 11:59:26,051+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255 >>>>>>>> 2018-05-10 11:59:26,052+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] START, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>> log id: 4a7b280e >>>>>>>> 2018-05-10 11:59:26,054+02 ERROR >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] Command >>>>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' >>>>>>>> execution failed: null >>>>>>>> 2018-05-10 11:59:26,054+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e >>>>>>>> 2018-05-10 11:59:26,055+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] START, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), >>>>>>>> log id: 18adc534 >>>>>>>> 2018-05-10 11:59:26,057+02 ERROR >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] Command >>>>>>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, >>>>>>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' >>>>>>>> execution failed: null >>>>>>>> 2018-05-10 11:59:26,057+02 INFO >>>>>>>> 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534 >>>>>>>> 2018-05-10 11:59:26,058+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] START, >>>>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, >>>>>>>> GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', >>>>>>>> volumeName='volume1'}), log id: 3451084f >>>>>>>> 2018-05-10 11:59:28,050+02 INFO >>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>>>> sharedLocks=''}' >>>>>>>> 2018-05-10 11:59:28,060+02 INFO >>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>>>> sharedLocks=''}' >>>>>>>> 2018-05-10 11:59:28,062+02 INFO >>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>>>> sharedLocks=''}' >>>>>>>> 2018-05-10 11:59:31,054+02 INFO >>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>>>> sharedLocks=''}' >>>>>>>> 2018-05-10 11:59:31,054+02 INFO >>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>>>> sharedLocks=''}' >>>>>>>> 2018-05-10 11:59:31,062+02 INFO >>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>>>> sharedLocks=''}' >>>>>>>> 2018-05-10 11:59:31,064+02 INFO >>>>>>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) >>>>>>>> [2eb1c389] Failed to acquire lock and wait lock >>>>>>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', >>>>>>>> sharedLocks=''}' >>>>>>>> 2018-05-10 11:59:31,465+02 INFO >>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] >>>>>>>> (DefaultQuartzScheduler4) [400fa486] FINISH, >>>>>>>> GetGlusterVolumeAdvancedDetailsVDSCommand, return: >>>>>>>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, >>>>>>>> log id: 3451084f >>>>>>>> 2018-05-10 11:59:31,466+02 ERROR >>>>>>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) >>>>>>>> [400fa486] Error while refreshing brick statuses for volume 'volume1' of >>>>>>>> cluster 'C6220': null
>>>>>>>> R >>>>>>>> Tibor
>>>>>>>> ----- 2018. máj.. 10., 11:43, Sahina Bose < [ mailto:sabose@redhat.com | >>>>>>>> sabose@redhat.com ] > írta:
>>>>>>>>> This doesn't affect the monitoring of state. >>>>>>>>> Any errors in vdsm.log? >>>>>>>>> Or errors in engine.log of the form "Error while refreshing brick statuses for >>>>>>>>> volume"
>>>>>>>>> On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>>>>>>> tdemeter@itsmart.hu ] > wrote:
>>>>>>>>>> Hi,
>>>>>>>>>> Thank you for your fast reply :)
>>>>>>>>>> 2018-05-10 11:01:51,574+02 INFO >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] START, >>>>>>>>>> GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>>>> log id: 39adbbb8 >>>>>>>>>> 2018-05-10 11:01:51,768+02 INFO >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, >>>>>>>>>> return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , >>>>>>>>>> n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log >>>>>>>>>> id: 39adbbb8 >>>>>>>>>> 2018-05-10 11:01:51,788+02 INFO >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] START, >>>>>>>>>> GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>>>> GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>>>> log id: 738a7261 >>>>>>>>>> 2018-05-10 11:01:51,892+02 WARN >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>> '10.104.0.1:/gluster/brick/brick1' of volume >>>>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>> 2018-05-10 11:01:51,898+02 WARN >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>> '10.104.0.1:/gluster/brick/brick2' of volume >>>>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>> 2018-05-10 11:01:51,905+02 WARN >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>> '10.104.0.1:/gluster/brick/brick3' of volume >>>>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>> 2018-05-10 11:01:51,911+02 WARN >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>> '10.104.0.1:/gluster2/brick/brick1' of volume >>>>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>> 2018-05-10 11:01:51,917+02 WARN >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>> '10.104.0.1:/gluster2/brick/brick2' of volume >>>>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>> 2018-05-10 11:01:51,924+02 WARN >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>> '10.104.0.1:/gluster2/brick/brick3' of volume >>>>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>>>> 
network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>> 2018-05-10 11:01:51,925+02 INFO >>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, >>>>>>>>>> return: >>>>>>>>>> {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, >>>>>>>>>> e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, >>>>>>>>>> log id: 738a7261
>>>>>>>>>> This is happening continuously.
>>>>>>>>>> Thanks! >>>>>>>>>> Tibor
>>>>>>>>>> ----- 2018. máj.. 10., 10:56, Sahina Bose < [ mailto:sabose@redhat.com | >>>>>>>>>> sabose@redhat.com ] > írta:
>>>>>>>>>>> Could you check the engine.log if there are errors related to getting >>>>>>>>>>> GlusterVolumeAdvancedDetails ?
>>>>>>>>>>> On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
>>>>>>>>>>>> Dear Ovirt Users, >>>>>>>>>>>> I've followed the self-hosted-engine upgrade documentation and upgraded my 4.1 >>>>>>>>>>>> system to 4.2.3. >>>>>>>>>>>> I upgraded the first node with yum upgrade, and it seems to be working fine now. But since >>>>>>>>>>>> the upgrade, the gluster information seems to be displayed incorrectly on the admin >>>>>>>>>>>> panel. The volume is yellow, and there are red bricks from that node. >>>>>>>>>>>> I've checked in the console, and I think my gluster is not degraded:
>>>>>>>>>>>> root@n1 ~]# gluster volume list >>>>>>>>>>>> volume1 >>>>>>>>>>>> volume2 >>>>>>>>>>>> [root@n1 ~]# gluster volume info >>>>>>>>>>>> Volume Name: volume1 >>>>>>>>>>>> Type: Distributed-Replicate >>>>>>>>>>>> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27 >>>>>>>>>>>> Status: Started >>>>>>>>>>>> Snapshot Count: 0 >>>>>>>>>>>> Number of Bricks: 3 x 3 = 9 >>>>>>>>>>>> Transport-type: tcp >>>>>>>>>>>> Bricks: >>>>>>>>>>>> Brick1: 10.104.0.1:/gluster/brick/brick1 >>>>>>>>>>>> Brick2: 10.104.0.2:/gluster/brick/brick1 >>>>>>>>>>>> Brick3: 10.104.0.3:/gluster/brick/brick1 >>>>>>>>>>>> Brick4: 10.104.0.1:/gluster/brick/brick2 >>>>>>>>>>>> Brick5: 10.104.0.2:/gluster/brick/brick2 >>>>>>>>>>>> Brick6: 10.104.0.3:/gluster/brick/brick2 >>>>>>>>>>>> Brick7: 10.104.0.1:/gluster/brick/brick3 >>>>>>>>>>>> Brick8: 10.104.0.2:/gluster/brick/brick3 >>>>>>>>>>>> Brick9: 10.104.0.3:/gluster/brick/brick3 >>>>>>>>>>>> Options Reconfigured: >>>>>>>>>>>> transport.address-family: inet >>>>>>>>>>>> performance.readdir-ahead: on >>>>>>>>>>>> nfs.disable: on >>>>>>>>>>>> storage.owner-uid: 36 >>>>>>>>>>>> storage.owner-gid: 36 >>>>>>>>>>>> performance.quick-read: off >>>>>>>>>>>> performance.read-ahead: off >>>>>>>>>>>> performance.io-cache: off >>>>>>>>>>>> performance.stat-prefetch: off >>>>>>>>>>>> performance.low-prio-threads: 32 >>>>>>>>>>>> network.remote-dio: enable >>>>>>>>>>>> cluster.eager-lock: enable >>>>>>>>>>>> cluster.quorum-type: auto >>>>>>>>>>>> cluster.server-quorum-type: server >>>>>>>>>>>> cluster.data-self-heal-algorithm: full >>>>>>>>>>>> cluster.locking-scheme: granular >>>>>>>>>>>> cluster.shd-max-threads: 8 >>>>>>>>>>>> cluster.shd-wait-qlength: 10000 >>>>>>>>>>>> features.shard: on >>>>>>>>>>>> user.cifs: off >>>>>>>>>>>> server.allow-insecure: on >>>>>>>>>>>> Volume Name: volume2 >>>>>>>>>>>> Type: Distributed-Replicate >>>>>>>>>>>> Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8 >>>>>>>>>>>> Status: Started >>>>>>>>>>>> Snapshot Count: 0 >>>>>>>>>>>> Number of Bricks: 3 x 3 = 9 >>>>>>>>>>>> Transport-type: tcp >>>>>>>>>>>> Bricks: >>>>>>>>>>>> Brick1: 10.104.0.1:/gluster2/brick/brick1 >>>>>>>>>>>> Brick2: 10.104.0.2:/gluster2/brick/brick1 >>>>>>>>>>>> Brick3: 10.104.0.3:/gluster2/brick/brick1 >>>>>>>>>>>> Brick4: 10.104.0.1:/gluster2/brick/brick2 >>>>>>>>>>>> Brick5: 10.104.0.2:/gluster2/brick/brick2 >>>>>>>>>>>> Brick6: 10.104.0.3:/gluster2/brick/brick2 >>>>>>>>>>>> Brick7: 10.104.0.1:/gluster2/brick/brick3 >>>>>>>>>>>> Brick8: 10.104.0.2:/gluster2/brick/brick3 >>>>>>>>>>>> Brick9: 10.104.0.3:/gluster2/brick/brick3 >>>>>>>>>>>> Options Reconfigured: >>>>>>>>>>>> nfs.disable: on >>>>>>>>>>>> performance.readdir-ahead: on >>>>>>>>>>>> transport.address-family: inet >>>>>>>>>>>> cluster.quorum-type: auto >>>>>>>>>>>> network.ping-timeout: 10 >>>>>>>>>>>> auth.allow: * >>>>>>>>>>>> performance.quick-read: off >>>>>>>>>>>> performance.read-ahead: off >>>>>>>>>>>> performance.io-cache: off >>>>>>>>>>>> performance.stat-prefetch: off >>>>>>>>>>>> performance.low-prio-threads: 32 >>>>>>>>>>>> network.remote-dio: enable >>>>>>>>>>>> cluster.eager-lock: enable >>>>>>>>>>>> cluster.server-quorum-type: server >>>>>>>>>>>> cluster.data-self-heal-algorithm: full >>>>>>>>>>>> cluster.locking-scheme: granular >>>>>>>>>>>> cluster.shd-max-threads: 8 >>>>>>>>>>>> cluster.shd-wait-qlength: 10000 >>>>>>>>>>>> features.shard: on >>>>>>>>>>>> user.cifs: off >>>>>>>>>>>> storage.owner-uid: 36 >>>>>>>>>>>> storage.owner-gid: 36 >>>>>>>>>>>> server.allow-insecure: on >>>>>>>>>>>> [root@n1 ~]# 
gluster volume status >>>>>>>>>>>> Status of volume: volume1 >>>>>>>>>>>> Gluster process TCP Port RDMA Port Online Pid >>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick1 49152 0 Y 3464 >>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick1 49152 0 Y 68937 >>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick1 49161 0 Y 94506 >>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick2 49153 0 Y 3457 >>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick2 49153 0 Y 68943 >>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick2 49162 0 Y 94514 >>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick3 49154 0 Y 3465 >>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick3 49154 0 Y 68949 >>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick3 49163 0 Y 94520 >>>>>>>>>>>> Self-heal Daemon on localhost N/A N/A Y 54356 >>>>>>>>>>>> Self-heal Daemon on 10.104.0.2 N/A N/A Y 962 >>>>>>>>>>>> Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977 >>>>>>>>>>>> Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603 >>>>>>>>>>>> Task Status of Volume volume1 >>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>> There are no active volume tasks >>>>>>>>>>>> Status of volume: volume2 >>>>>>>>>>>> Gluster process TCP Port RDMA Port Online Pid >>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick1 49155 0 Y 3852 >>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick1 49158 0 Y 68955 >>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick1 49164 0 Y 94527 >>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick2 49156 0 Y 3851 >>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick2 49159 0 Y 68961 >>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick2 49165 0 Y 94533 >>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick3 49157 0 Y 3883 >>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick3 49160 0 Y 68968 >>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick3 49166 0 Y 94541 >>>>>>>>>>>> Self-heal Daemon on localhost N/A N/A Y 54356 >>>>>>>>>>>> Self-heal Daemon on 10.104.0.2 N/A N/A Y 962 >>>>>>>>>>>> Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977 >>>>>>>>>>>> Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603 >>>>>>>>>>>> Task Status of Volume volume2 >>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>> There are no active volume tasks >>>>>>>>>>>> I think ovirt can't read valid informations about gluster. >>>>>>>>>>>> I can't contiune upgrade of other hosts until this problem exist.
>>>>>>>>>>>> Please help me:)
>>>>>>>>>>>> Thanks
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Tibor
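For reference, two standard gluster CLI checks that complement the volume info and status output quoted above and help confirm the bricks really are healthy from the gluster side (volume names are taken from the listing above; run as root on any of the gluster nodes):

gluster peer status                 # every peer should report State: Peer in Cluster (Connected)
gluster volume heal volume1 info    # entries still pending self-heal; ideally 0 per brick
gluster volume heal volume2 info    # same check for the second volume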

Thanks for reporting this. https://gerrit.ovirt.org/91375 fixes this. I've re-opened bug https://bugzilla.redhat.com/show_bug.cgi?id=1574508 On Thu, May 17, 2018 at 10:12 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
4.2.4-0.0.master.20180515183442.git00e1340.el7.centos
First, I did a yum update "ovirt-*-setup*"; second, I ran engine-setup to upgrade.
I didn't remove the old repos, just installed the nightly repo.
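As an illustration, a minimal sketch of the upgrade sequence described above, run as root on the engine VM; only the two commands mentioned in this thread are shown, and it assumes the nightly repo has already been installed and enabled:

yum update "ovirt-*-setup*"    # pull the updated oVirt setup packages from the enabled repos
engine-setup                   # run the actual engine upgrade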
Thank you again,
Regards,
Tibor
----- 2018. máj.. 17., 15:02, Sahina Bose <sabose@redhat.com> írta:
It doesn't look like the patch was applied. Still see the same error in engine.log "Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null"
Did you use engine-setup to upgrade? What's the version of ovirt-engine currently installed?
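A quick way to answer both questions on the engine VM could look like the sketch below (the engine.log path is the usual oVirt location and is an assumption here, since it is not spelled out in the thread):

rpm -q ovirt-engine    # shows the exact ovirt-engine build that is installed
grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail -n 5    # is the error still being logged?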
On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
sure,
Thank you for your time!
R Tibor

Hi, do I have to update the engine again? Thanks, R Tibor ----- 2018. máj.. 18., 6:47, Sahina Bose <sabose@redhat.com> írta:
Thanks for reporting this. https://gerrit.ovirt.org/91375 fixes this. I've re-opened bug https://bugzilla.redhat.com/show_bug.cgi?id=1574508
>>>>>>>>>> R >>>>>>>>>> Tibor
>>>>>>>>>> ----- 2018. máj.. 10., 11:43, Sahina Bose < [ mailto:sabose@redhat.com | >>>>>>>>>> sabose@redhat.com ] > írta:
>>>>>>>>>>> This doesn't affect the monitoring of state. >>>>>>>>>>> Any errors in vdsm.log? >>>>>>>>>>> Or errors in engine.log of the form "Error while refreshing brick statuses for >>>>>>>>>>> volume"
>>>>>>>>>>> On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>>>>>>>>> tdemeter@itsmart.hu ] > wrote:
>>>>>>>>>>>> Hi,
>>>>>>>>>>>> Thank you for your fast reply :)
>>>>>>>>>>>> 2018-05-10 11:01:51,574+02 INFO >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] START, >>>>>>>>>>>> GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>>>>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>>>>>> log id: 39adbbb8 >>>>>>>>>>>> 2018-05-10 11:01:51,768+02 INFO >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, >>>>>>>>>>>> return: [ [ http://10.101.0.2/24:CONNECTED | 10.101.0.2/24:CONNECTED ] , >>>>>>>>>>>> n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log >>>>>>>>>>>> id: 39adbbb8 >>>>>>>>>>>> 2018-05-10 11:01:51,788+02 INFO >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] START, >>>>>>>>>>>> GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, >>>>>>>>>>>> GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), >>>>>>>>>>>> log id: 738a7261 >>>>>>>>>>>> 2018-05-10 11:01:51,892+02 WARN >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>>>> '10.104.0.1:/gluster/brick/brick1' of volume >>>>>>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>>>> 2018-05-10 11:01:51,898+02 WARN >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>>>> '10.104.0.1:/gluster/brick/brick2' of volume >>>>>>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>>>> 2018-05-10 11:01:51,905+02 WARN >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>>>> '10.104.0.1:/gluster/brick/brick3' of volume >>>>>>>>>>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster >>>>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>>>> 2018-05-10 11:01:51,911+02 WARN >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>>>> '10.104.0.1:/gluster2/brick/brick1' of volume >>>>>>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>>>> 2018-05-10 11:01:51,917+02 WARN >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>>>> '10.104.0.1:/gluster2/brick/brick2' of volume >>>>>>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>>>> 2018-05-10 11:01:51,924+02 WARN >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick >>>>>>>>>>>> '10.104.0.1:/gluster2/brick/brick3' of 
volume >>>>>>>>>>>> '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster >>>>>>>>>>>> network found in cluster '59c10db3-0324-0320-0120-000000000339' >>>>>>>>>>>> 2018-05-10 11:01:51,925+02 INFO >>>>>>>>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>>>>>>>> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, >>>>>>>>>>>> return: >>>>>>>>>>>> {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, >>>>>>>>>>>> e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, >>>>>>>>>>>> log id: 738a7261
>>>>>>>>>>>> This happening continuously.
>>>>>>>>>>>> Thanks! >>>>>>>>>>>> Tibor
>>>>>>>>>>>> ----- 2018. máj.. 10., 10:56, Sahina Bose < [ mailto:sabose@redhat.com | >>>>>>>>>>>> sabose@redhat.com ] > írta:
>>>>>>>>>>>>> Could you check the engine.log if there are errors related to getting >>>>>>>>>>>>> GlusterVolumeAdvancedDetails ?
>>>>>>>>>>>>> On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor < [ mailto:tdemeter@itsmart.hu | >>>>>>>>>>>>> tdemeter@itsmart.hu ] > wrote:
>>>>>>>>>>>>>> Dear Ovirt Users, >>>>>>>>>>>>>> I've followed up the self-hosted-engine upgrade documentation, I upgraded my 4.1 >>>>>>>>>>>>>> system to 4.2.3. >>>>>>>>>>>>>> I upgaded the first node with yum upgrade, it seems working now fine. But since >>>>>>>>>>>>>> upgrade, the gluster informations seems to displayed incorrect on the admin >>>>>>>>>>>>>> panel. The volume yellow, and there are red bricks from that node. >>>>>>>>>>>>>> I've checked in console, I think my gluster is not degraded:
>>>>>>>>>>>>>> root@n1 ~]# gluster volume list >>>>>>>>>>>>>> volume1 >>>>>>>>>>>>>> volume2 >>>>>>>>>>>>>> [root@n1 ~]# gluster volume info >>>>>>>>>>>>>> Volume Name: volume1 >>>>>>>>>>>>>> Type: Distributed-Replicate >>>>>>>>>>>>>> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27 >>>>>>>>>>>>>> Status: Started >>>>>>>>>>>>>> Snapshot Count: 0 >>>>>>>>>>>>>> Number of Bricks: 3 x 3 = 9 >>>>>>>>>>>>>> Transport-type: tcp >>>>>>>>>>>>>> Bricks: >>>>>>>>>>>>>> Brick1: 10.104.0.1:/gluster/brick/brick1 >>>>>>>>>>>>>> Brick2: 10.104.0.2:/gluster/brick/brick1 >>>>>>>>>>>>>> Brick3: 10.104.0.3:/gluster/brick/brick1 >>>>>>>>>>>>>> Brick4: 10.104.0.1:/gluster/brick/brick2 >>>>>>>>>>>>>> Brick5: 10.104.0.2:/gluster/brick/brick2 >>>>>>>>>>>>>> Brick6: 10.104.0.3:/gluster/brick/brick2 >>>>>>>>>>>>>> Brick7: 10.104.0.1:/gluster/brick/brick3 >>>>>>>>>>>>>> Brick8: 10.104.0.2:/gluster/brick/brick3 >>>>>>>>>>>>>> Brick9: 10.104.0.3:/gluster/brick/brick3 >>>>>>>>>>>>>> Options Reconfigured: >>>>>>>>>>>>>> transport.address-family: inet >>>>>>>>>>>>>> performance.readdir-ahead: on >>>>>>>>>>>>>> nfs.disable: on >>>>>>>>>>>>>> storage.owner-uid: 36 >>>>>>>>>>>>>> storage.owner-gid: 36 >>>>>>>>>>>>>> performance.quick-read: off >>>>>>>>>>>>>> performance.read-ahead: off >>>>>>>>>>>>>> performance.io-cache: off >>>>>>>>>>>>>> performance.stat-prefetch: off >>>>>>>>>>>>>> performance.low-prio-threads: 32 >>>>>>>>>>>>>> network.remote-dio: enable >>>>>>>>>>>>>> cluster.eager-lock: enable >>>>>>>>>>>>>> cluster.quorum-type: auto >>>>>>>>>>>>>> cluster.server-quorum-type: server >>>>>>>>>>>>>> cluster.data-self-heal-algorithm: full >>>>>>>>>>>>>> cluster.locking-scheme: granular >>>>>>>>>>>>>> cluster.shd-max-threads: 8 >>>>>>>>>>>>>> cluster.shd-wait-qlength: 10000 >>>>>>>>>>>>>> features.shard: on >>>>>>>>>>>>>> user.cifs: off >>>>>>>>>>>>>> server.allow-insecure: on >>>>>>>>>>>>>> Volume Name: volume2 >>>>>>>>>>>>>> Type: Distributed-Replicate >>>>>>>>>>>>>> Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8 >>>>>>>>>>>>>> Status: Started >>>>>>>>>>>>>> Snapshot Count: 0 >>>>>>>>>>>>>> Number of Bricks: 3 x 3 = 9 >>>>>>>>>>>>>> Transport-type: tcp >>>>>>>>>>>>>> Bricks: >>>>>>>>>>>>>> Brick1: 10.104.0.1:/gluster2/brick/brick1 >>>>>>>>>>>>>> Brick2: 10.104.0.2:/gluster2/brick/brick1 >>>>>>>>>>>>>> Brick3: 10.104.0.3:/gluster2/brick/brick1 >>>>>>>>>>>>>> Brick4: 10.104.0.1:/gluster2/brick/brick2 >>>>>>>>>>>>>> Brick5: 10.104.0.2:/gluster2/brick/brick2 >>>>>>>>>>>>>> Brick6: 10.104.0.3:/gluster2/brick/brick2 >>>>>>>>>>>>>> Brick7: 10.104.0.1:/gluster2/brick/brick3 >>>>>>>>>>>>>> Brick8: 10.104.0.2:/gluster2/brick/brick3 >>>>>>>>>>>>>> Brick9: 10.104.0.3:/gluster2/brick/brick3 >>>>>>>>>>>>>> Options Reconfigured: >>>>>>>>>>>>>> nfs.disable: on >>>>>>>>>>>>>> performance.readdir-ahead: on >>>>>>>>>>>>>> transport.address-family: inet >>>>>>>>>>>>>> cluster.quorum-type: auto >>>>>>>>>>>>>> network.ping-timeout: 10 >>>>>>>>>>>>>> auth.allow: * >>>>>>>>>>>>>> performance.quick-read: off >>>>>>>>>>>>>> performance.read-ahead: off >>>>>>>>>>>>>> performance.io-cache: off >>>>>>>>>>>>>> performance.stat-prefetch: off >>>>>>>>>>>>>> performance.low-prio-threads: 32 >>>>>>>>>>>>>> network.remote-dio: enable >>>>>>>>>>>>>> cluster.eager-lock: enable >>>>>>>>>>>>>> cluster.server-quorum-type: server >>>>>>>>>>>>>> cluster.data-self-heal-algorithm: full >>>>>>>>>>>>>> cluster.locking-scheme: granular >>>>>>>>>>>>>> cluster.shd-max-threads: 8 >>>>>>>>>>>>>> cluster.shd-wait-qlength: 10000 >>>>>>>>>>>>>> features.shard: on 
>>>>>>>>>>>>>> user.cifs: off >>>>>>>>>>>>>> storage.owner-uid: 36 >>>>>>>>>>>>>> storage.owner-gid: 36 >>>>>>>>>>>>>> server.allow-insecure: on >>>>>>>>>>>>>> [root@n1 ~]# gluster volume status >>>>>>>>>>>>>> Status of volume: volume1 >>>>>>>>>>>>>> Gluster process TCP Port RDMA Port Online Pid >>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick1 49152 0 Y 3464 >>>>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick1 49152 0 Y 68937 >>>>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick1 49161 0 Y 94506 >>>>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick2 49153 0 Y 3457 >>>>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick2 49153 0 Y 68943 >>>>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick2 49162 0 Y 94514 >>>>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick3 49154 0 Y 3465 >>>>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick3 49154 0 Y 68949 >>>>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick3 49163 0 Y 94520 >>>>>>>>>>>>>> Self-heal Daemon on localhost N/A N/A Y 54356 >>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.2 N/A N/A Y 962 >>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977 >>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603 >>>>>>>>>>>>>> Task Status of Volume volume1 >>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>> There are no active volume tasks >>>>>>>>>>>>>> Status of volume: volume2 >>>>>>>>>>>>>> Gluster process TCP Port RDMA Port Online Pid >>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick1 49155 0 Y 3852 >>>>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick1 49158 0 Y 68955 >>>>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick1 49164 0 Y 94527 >>>>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick2 49156 0 Y 3851 >>>>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick2 49159 0 Y 68961 >>>>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick2 49165 0 Y 94533 >>>>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick3 49157 0 Y 3883 >>>>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick3 49160 0 Y 68968 >>>>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick3 49166 0 Y 94541 >>>>>>>>>>>>>> Self-heal Daemon on localhost N/A N/A Y 54356 >>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.2 N/A N/A Y 962 >>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977 >>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603 >>>>>>>>>>>>>> Task Status of Volume volume2 >>>>>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>>>>> There are no active volume tasks >>>>>>>>>>>>>> I think ovirt can't read valid informations about gluster. >>>>>>>>>>>>>> I can't contiune upgrade of other hosts until this problem exist.
>>>>>>>>>>>>>> Please help me:)
>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>> Tibor

Dear Sahina,
Are there any changes with this bug? I still haven't finished the upgrade process that I started on 9th May :(
Please help me if you can.
Thanks
Tibor
----- 2018. máj.. 18., 9:29, Demeter Tibor <tdemeter@itsmart.hu> írta:
Hi,
Do I have to update the engine again?
Thanks,
R Tibor
----- 2018. máj.. 18., 6:47, Sahina Bose <sabose@redhat.com> írta:
Thanks for reporting this. https://gerrit.ovirt.org/91375 fixes this. I've re-opened bug https://bugzilla.redhat.com/show_bug.cgi?id=1574508
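One way to check whether a given nightly build already contains such a fix is to look at the installed package and its changelog on the engine VM (a minimal sketch; it assumes the fix is referenced in the ovirt-engine RPM changelog, which is not guaranteed - otherwise compare the build date in the version string with the date the patch was merged):

# On the engine VM: installed engine version and the most recent changelog entries
rpm -q ovirt-engine
rpm -q --changelog ovirt-engine | head -n 20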
On Thu, May 17, 2018 at 10:12 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
4.2.4-0.0.master.20180515183442.git00e1340.el7.centos
First I did a yum update "ovirt-*-setup*", then I ran engine-setup to upgrade (see the sketch after this message).
I didn't remove the old repos, just installed the nightly repo.
Thank you again,
Regards,
Tibor
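For reference, a minimal sketch of the engine-only update flow described above (it assumes the nightly snapshot repository is already enabled on the engine VM, as mentioned; the hosts are not touched by this step):

# On the engine VM only
yum update "ovirt-*-setup*"   # pull the latest setup packages from the enabled repos
engine-setup                  # run the upgrade and answer the prompts as usual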
----- 2018. máj.. 17., 15:02, Sahina Bose <sabose@redhat.com> írta:
It doesn't look like the patch was applied. I still see the same error in engine.log: "Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null"
Did you use engine-setup to upgrade? What's the version of ovirt-engine currently installed?
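Both questions can be answered quickly from the engine VM (a sketch; it assumes the default log location /var/log/ovirt-engine/engine.log):

# Currently installed engine package
rpm -q ovirt-engine
# Check whether the brick-status refresh error is still being logged
grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail -n 5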
On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
sure,
Thank you for your time!
R Tibor

Hello! On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Is there any changes with this bug?
Still I haven't finish my upgrade process that i've started on 9th may:(
Please help me if you can.
Looks like all required patches are already merged, so could you please update your engine again to the latest nightly build?

Hi,
I've updated again to the latest version, but there are no changes. All of the bricks on my first node are down in the GUI (in the console they are ok). An interesting thing: the "Self-Heal info" column shows "OK" for all hosts and all bricks, but the "Space used" column is zero for all hosts/bricks. Can I force remove and re-add my host to the cluster while it is a gluster member? Is it safe? What can I do? (See the console cross-check sketch after this message.)
I haven't updated the other hosts while gluster is not working fine, or while the GUI does not detect it correctly. So my other hosts remain on 4.1 for now :(
Thanks in advance,
Regards
Tibor
----- 2018. máj.. 23., 14:45, Denis Chapligin <dchaplyg@redhat.com> írta:
Hello!
On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Is there any changes with this bug?
Still I haven't finish my upgrade process that i've started on 9th may:(
Please help me if you can.
Looks like all required patches are already merged, so could you please to update your engine again to the latest night build?
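To cross-check what the GUI shows against gluster itself, the per-brick state, used space and pending heals can be read from the console (a sketch using the volume names from this thread; run it on any of the gluster hosts):

# Per-brick online state, disk usage and inode counts straight from gluster
gluster volume status volume1 detail
gluster volume status volume2 detail
# Pending self-heal entries per brick (should be empty if "Self-Heal info" is really OK)
gluster volume heal volume1 info
gluster volume heal volume2 info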

Hi,
Could somebody answer my question please? It is very important for me; I have not been able to finish my upgrade process (from 4.1 to 4.2) since 9th May!
Meanwhile - I don't know why - one of my two gluster volumes now seems UP (green) in the GUI. So now only one is down.
I need help. What can I do?
Thanks in advance,
Regards,
Tibor
----- 2018. máj.. 23., 21:09, Demeter Tibor <tdemeter@itsmart.hu> írta:

On Mon, May 28, 2018 at 1:06 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Hi,
Could somebody answer my question, please? It is very important for me: I have not been able to finish my upgrade process (from 4.1 to 4.2) since 9th May!
Can you explain how the upgrade process is blocked due to the monitoring? If it's because you cannot move the host to maintenance, can you try with the option "Ignore quorum checks" enabled?

Dear Sahina,
Yes, exactly. I can check that check box, but I don't know how safe that is. Is it safe?
I want to upgrade all of my hosts. Once that is done, will the monitoring work properly?
Thanks. R.
Tibor

On Mon, May 28, 2018 at 4:47 PM, Demeter Tibor <tdemeter@itsmart.hu> wrote:
Dear Sahina,
Yes, exactly. I can check that check box, but I don't know how safe that is. Is it safe?
It is safe - if you can ensure that only one host is put into maintenance at a time.
I want to upgrade all of my hosts. Once that is done, will the monitoring work properly?
If it does not, please provide the engine.log again once you've upgraded all the hosts.
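(Put together, the per-host flow being suggested is roughly the following - a sketch, assuming each host is moved to maintenance from the GUI with "Ignore quorum checks" enabled, one host at a time; VOLNAME stands for the volume name.)

    # 1. In the GUI: put ONE host into maintenance, with "Ignore quorum checks" enabled
    # 2. On that host: update the packages (reboot if a new kernel or vdsm requires it)
    yum update
    # 3. In the GUI: activate the host again
    # 4. Before touching the next host, wait until self-heal has finished:
    gluster volume heal VOLNAME info
    #    repeat until it reports zero entries for every brick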

Hi, OK, I will try it.
In this case, is it possible to remove and re-add a host that is a member of the HA gluster cluster? This is another task, but I need to separate my gluster network from my ovirtmgmt network. What is the recommended way to do this? It is not important now, but I will need to do it in the future.
I will attach my engine.log after I upgrade my hosts.
Thanks, Regards.
Tibor
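(As a starting point for the network-separation question, these read-only commands show which addresses and ports gluster is currently using for its bricks and peers - a suggestion only, not something prescribed in this thread.)

    # Show which hostnames/IPs and ports the bricks are registered under
    gluster volume status

    # Show which addresses the peers are known by
    gluster peer status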

Hi,
I've successfully upgraded my hosts and I could raise the cluster level to 4.2. Everything seems fine, but the monitoring problem is not resolved: the bricks on my first node are still shown as down (red), although glusterfs is working fine (I verified it in the terminal).
I've attached my engine.log.
Thanks in advance,
R, Tibor

I had the same problem when I upgraded to 4.2. I found that if I went to the brick in the UI and selected it, there was a "Start" button in the upper right of the GUI; clicking that resolved the problem a few minutes later. I had to repeat this for each volume that showed a brick as down when the brick was not actually down.
--Jim
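(The command-line counterpart of that per-brick "Start" action is, as far as I know, a forced volume start, which restarts only the brick processes that are down and leaves running bricks alone; VOLNAME stands for the volume name.)

    # Restart any brick processes that are down for an already-started volume
    gluster volume start VOLNAME force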

Dear Jim,
Thank you for your help - now it's working again! :)
Have a nice day!
Regards,
Tibor
Participants (4): Demeter Tibor, Denis Chaplygin, Jim Kusznir, Sahina Bose