Hi,
The currently installed version is 4.2.4-0.0.master.20180515183442.git00e1340.el7.centos.
First, I did a yum update "ovirt-*-setup*"; second, I ran engine-setup to do the upgrade.
I didn't remove the old repos, I just installed the nightly repo.
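Roughly, the steps on the engine VM looked like this (a sketch only; the exact nightly repo RPM URL below is an assumption, not quoted from this thread):

# install the oVirt master/nightly snapshot repo (URL assumed)
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
# update only the setup packages, then run the upgrade tool
yum update "ovirt-*-setup*"
engine-setup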
Thank you again,
Regards,
Tibor
----- On May 17, 2018, at 15:02, Sahina Bose <sabose(a)redhat.com> wrote:
It doesn't look like the patch was applied. Still see the same error in engine.log:
"Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null"
Did you use engine-setup to upgrade? What's the version of ovirt-engine currently installed?
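For reference, both can be checked on the engine VM with standard commands (only the default engine.log path is assumed here):

rpm -q ovirt-engine   # installed engine package version
grep "refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail -n 5   # is the error still being logged?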
On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
> Hi,
> sure,
> Thank you for your time!
> R
> Tibor
> ----- On May 17, 2018, at 12:19, Sahina Bose <sabose(a)redhat.com> wrote:
>> [+users]
>> Can you provide the engine.log to see why the monitoring is not working here?
>> thanks!
>> On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>> Hi,
>>> Meanwhile, I upgraded the engine, but the gluster state is the same on my first node.
>>> I've attached some screenshots of my problem.
>>> Thanks
>>> Tibor
>>> ----- On May 16, 2018, at 10:16, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>> Hi,
>>>> When 4.2.4 is released, do I just have to remove the nightly repo and update to stable?
>>>> I'm sorry for my terrible English; I'll try to explain what my problem with the update was.
>>>> I upgraded from 4.1.8.
>>>> Maybe the documentation needs updating, because I had a lot of questions during the upgrade and I was not sure about all of the necessary steps. For example, if I have to install the new 4.2 repo on the hosts, do I then need to remove the old repo from them?
>>>> Why do I need to do a "yum update -y" on the hosts when there is an "Update host" menu in the GUI? So maybe that part is outdated.
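>>>> (For illustration, what I assume the host-side repo swap would look like - a sketch only; the release package names and URL are my assumption and are not confirmed anywhere in this thread:)
>>>> # remove the old 4.1 release package and install the 4.2 one (names assumed)
>>>> yum remove ovirt-release41
>>>> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
>>>> # then either "yum update -y" on the host, or use the host upgrade action in the GUI
>>>> yum update -y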
>>>> Since upgrading the hosted engine and the first node, I have had problems with gluster.
>>>> It seems to be working fine if you check it from the console ("gluster volume status", etc.), but not in the GUI, because the volume is now yellow and the bricks are red on the first node.
>>>> Previously I made a mistake with glusterfs; my gluster config was wrong. I have corrected it, but that did not help: the gluster bricks are still red on my first node...
>>>> Now I'm trying to upgrade to the nightly build, but I'm afraid, because this is a live production system and I can't afford downtime. I hope it will help me.
>>>> Thanks for all,
>>>> Regards,
>>>> Tibor Demeter
>>>> ----- On May 16, 2018, at 9:58, Sahina Bose <sabose(a)redhat.com> wrote:
>>>>> On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>>>> Hi,
>>>>>> Is it a different, unstable repo? I have a production cluster; how safe is that?
>>>>>> I don't have any experience with nightly builds. How can I use this? Does it have to be installed on the engine VM or on all of my hosts?
>>>>>> Thanks in advance for helping me.
>>>>> Only on the engine VM.
>>>>> Regarding stability - it passes CI, so it is relatively stable; beyond that there are no guarantees.
>>>>> What's the specific problem you're facing with the update? Can you elaborate?
>>>>>> Regards,
>>>>>> Tibor
>>>>>> ----- On May 15, 2018, at 9:58, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>>>>> Hi,
>>>>>>> Could you explain how I can use this patch?
>>>>>>> R,
>>>>>>> Tibor
>>>>>>> ----- On May 14, 2018, at 11:18, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>>>>>> Hi,
>>>>>>>> Sorry for my question, but can you please tell me how I can use this patch?
>>>>>>>> Thanks,
>>>>>>>> Regards,
>>>>>>>> Tibor
>>>>>>>>> ----- On May 14, 2018, at 10:47, Sahina Bose <sabose(a)redhat.com> wrote:
>>>>>>>>> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>>>>>>>> Hi,
>>>>>>>>>> Could someone please help me? I can't finish my upgrade process.
>>>>>>>>> https://gerrit.ovirt.org/91164 should fix the error you're facing.
>>>>>>>>> Can you elaborate on why this is affecting the upgrade process?
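>>>>>>>>> (A minimal sketch of one way to fetch a Gerrit change locally for inspection - the patch-set number in the refspec is an assumption; the change page shows the exact download command:)
>>>>>>>>> git clone https://gerrit.ovirt.org/ovirt-engine && cd ovirt-engine
>>>>>>>>> # Gerrit refspec convention: refs/changes/<last two digits of change>/<change number>/<patch set>
>>>>>>>>> git fetch https://gerrit.ovirt.org/ovirt-engine refs/changes/64/91164/1
>>>>>>>>> git checkout FETCH_HEAD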
>>>>>>>>>> Thanks
>>>>>>>>>> R
>>>>>>>>>> Tibor
>>>>>>>>>> ----- On May 10, 2018, at 12:51, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>>>>>>>>> Hi,
>>>>>>>>>>> I've attached the vdsm and supervdsm logs. But I don't have engine.log here, because that is on the hosted engine VM. Should I send that?
>>>>>>>>>>> Thank you
>>>>>>>>>>> Regards,
>>>>>>>>>>> Tibor
>>>>>>>>>>>> ----- On May 10, 2018, at 12:30, Sahina Bose <sabose(a)redhat.com> wrote:
>>>>>>>>>>>> There's a bug here. Can you file one, attaching this engine.log and also vdsm.log & supervdsm.log from n3.itsmart.cloud?
>>>>>>>>>>>> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>> I found this:
>>>>>>>>>>>>> 2018-05-10 03:24:19,096+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
>>>>>>>>>>>>> 2018-05-10 03:24:19,097+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of cluster 'C6220': null
>>>>>>>>>>>>> 2018-05-10 03:24:19,097+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
>>>>>>>>>>>>> 2018-05-10 03:24:19,104+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
>>>>>>>>>>>>> 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
>>>>>>>>>>>>> 2018-05-10 03:24:19,106+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
>>>>>>>>>>>>> 2018-05-10 03:24:19,107+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
>>>>>>>>>>>>> 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
>>>>>>>>>>>>> 2018-05-10 03:24:19,109+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
>>>>>>>>>>>>> 2018-05-10 03:24:19,110+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
>>>>>>>>>>>>> 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
>>>>>>>>>>>>> 2018-05-10 03:24:19,112+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
>>>>>>>>>>>>> 2018-05-10 03:24:19,113+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
>>>>>>>>>>>>> 2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
>>>>>>>>>>>>> 2018-05-10 03:24:19,116+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967
>>>>>>>>>>>>> 2018-05-10 03:24:19,117+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57', volumeName='volume1'}), log id: 7550e5c
>>>>>>>>>>>>> 2018-05-10 03:24:20,748+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7) [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@4a46066f, log id: 7550e5c
>>>>>>>>>>>>> 2018-05-10 03:24:20,749+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
>>>>>>>>>>>>> 2018-05-10 03:24:20,750+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 120cc68d
>>>>>>>>>>>>> 2018-05-10 03:24:20,930+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 120cc68d
>>>>>>>>>>>>> 2018-05-10 03:24:20,949+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 118aa264
>>>>>>>>>>>>> 2018-05-10 03:24:21,048+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>> 2018-05-10 03:24:21,055+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>> 2018-05-10 03:24:21,061+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>> 2018-05-10 03:24:21,067+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>> 2018-05-10 03:24:21,074+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>> 2018-05-10 03:24:21,080+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>> 2018-05-10 03:24:21,081+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [7715ceda] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 118aa264
>>>>>>>>>>>>> 2018-05-10 11:59:26,047+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
>>>>>>>>>>>>> 2018-05-10 11:59:26,047+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 14a71ef0
>>>>>>>>>>>>> 2018-05-10 11:59:26,048+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 28d9e255
>>>>>>>>>>>>> 2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
>>>>>>>>>>>>> 2018-05-10 11:59:26,051+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 28d9e255
>>>>>>>>>>>>> 2018-05-10 11:59:26,052+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 4a7b280e
>>>>>>>>>>>>> 2018-05-10 11:59:26,054+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
>>>>>>>>>>>>> 2018-05-10 11:59:26,054+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 4a7b280e
>>>>>>>>>>>>> 2018-05-10 11:59:26,055+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 18adc534
>>>>>>>>>>>>> 2018-05-10 11:59:26,057+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
>>>>>>>>>>>>> 2018-05-10 11:59:26,057+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id: 18adc534
>>>>>>>>>>>>> 2018-05-10 11:59:26,058+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n3.itsmart.cloud, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec', volumeName='volume1'}), log id: 3451084f
>>>>>>>>>>>>> 2018-05-10 11:59:28,050+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
>>>>>>>>>>>>> 2018-05-10 11:59:28,060+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
>>>>>>>>>>>>> 2018-05-10 11:59:28,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
>>>>>>>>>>>>> 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
>>>>>>>>>>>>> 2018-05-10 11:59:31,054+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
>>>>>>>>>>>>> 2018-05-10 11:59:31,062+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
>>>>>>>>>>>>> 2018-05-10 11:59:31,064+02 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler3) [2eb1c389] Failed to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-000000000339=GLUSTER]', sharedLocks=''}'
>>>>>>>>>>>>> 2018-05-10 11:59:31,465+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler4) [400fa486] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@3f1b7f43, log id: 3451084f
>>>>>>>>>>>>> 2018-05-10 11:59:31,466+02 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler4) [400fa486] Error while refreshing brick statuses for volume 'volume1' of cluster 'C6220': null
>>>>>>>>>>>>> R
>>>>>>>>>>>>> Tibor
>>>>>>>>>>>>> ----- On May 10, 2018, at 11:43, Sahina Bose <sabose(a)redhat.com> wrote:
>>>>>>>>>>>>>> This doesn't affect the monitoring of state.
>>>>>>>>>>>>>> Any errors in vdsm.log?
>>>>>>>>>>>>>> Or errors in engine.log of the form "Error while refreshing brick statuses for volume"?
>>>>>>>>>>>>>> On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>> Thank you for your fast reply :)
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,574+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,768+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,788+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] START, GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,892+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,898+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,905+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,911+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,917+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,924+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:/gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster network found in cluster '59c10db3-0324-0320-0120-000000000339'
>>>>>>>>>>>>>>> 2018-05-10 11:01:51,925+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d, e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log id: 738a7261
>>>>>>>>>>>>>>> This is happening continuously.
>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>> Tibor
>>>>>>>>>>>>>>> ----- On May 10, 2018, at 10:56, Sahina Bose <sabose(a)redhat.com> wrote:
>>>>>>>>>>>>>>>> Could you check the engine.log to see if there are errors related to getting GlusterVolumeAdvancedDetails?
>>>>>>>>>>>>>>>> On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
>>>>>>>>>>>>>>>>> Dear oVirt Users,
>>>>>>>>>>>>>>>>> I followed the self-hosted-engine upgrade documentation and upgraded my 4.1 system to 4.2.3.
>>>>>>>>>>>>>>>>> I upgraded the first node with yum upgrade, and it seems to be working fine now. But since the upgrade, the gluster information seems to be displayed incorrectly on the admin panel: the volume is yellow, and there are red bricks from that node.
>>>>>>>>>>>>>>>>> I've checked in the console; I think my gluster is not degraded:
>>>>>>>>>>>>>>>>> [root@n1 ~]# gluster volume list
>>>>>>>>>>>>>>>>> volume1
>>>>>>>>>>>>>>>>> volume2
>>>>>>>>>>>>>>>>> [root@n1 ~]# gluster volume info
>>>>>>>>>>>>>>>>> Volume Name: volume1
>>>>>>>>>>>>>>>>> Type: Distributed-Replicate
>>>>>>>>>>>>>>>>> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
>>>>>>>>>>>>>>>>> Status: Started
>>>>>>>>>>>>>>>>> Snapshot Count: 0
>>>>>>>>>>>>>>>>> Number of Bricks: 3 x 3 = 9
>>>>>>>>>>>>>>>>> Transport-type: tcp
>>>>>>>>>>>>>>>>> Bricks:
>>>>>>>>>>>>>>>>> Brick1: 10.104.0.1:/gluster/brick/brick1
>>>>>>>>>>>>>>>>> Brick2: 10.104.0.2:/gluster/brick/brick1
>>>>>>>>>>>>>>>>> Brick3: 10.104.0.3:/gluster/brick/brick1
>>>>>>>>>>>>>>>>> Brick4: 10.104.0.1:/gluster/brick/brick2
>>>>>>>>>>>>>>>>> Brick5: 10.104.0.2:/gluster/brick/brick2
>>>>>>>>>>>>>>>>> Brick6: 10.104.0.3:/gluster/brick/brick2
>>>>>>>>>>>>>>>>> Brick7: 10.104.0.1:/gluster/brick/brick3
>>>>>>>>>>>>>>>>> Brick8: 10.104.0.2:/gluster/brick/brick3
>>>>>>>>>>>>>>>>> Brick9: 10.104.0.3:/gluster/brick/brick3
>>>>>>>>>>>>>>>>> Options Reconfigured:
>>>>>>>>>>>>>>>>> transport.address-family: inet
>>>>>>>>>>>>>>>>> performance.readdir-ahead: on
>>>>>>>>>>>>>>>>> nfs.disable: on
>>>>>>>>>>>>>>>>> storage.owner-uid: 36
>>>>>>>>>>>>>>>>> storage.owner-gid: 36
>>>>>>>>>>>>>>>>> performance.quick-read: off
>>>>>>>>>>>>>>>>> performance.read-ahead: off
>>>>>>>>>>>>>>>>> performance.io-cache: off
>>>>>>>>>>>>>>>>> performance.stat-prefetch: off
>>>>>>>>>>>>>>>>> performance.low-prio-threads: 32
>>>>>>>>>>>>>>>>> network.remote-dio: enable
>>>>>>>>>>>>>>>>> cluster.eager-lock: enable
>>>>>>>>>>>>>>>>> cluster.quorum-type: auto
>>>>>>>>>>>>>>>>> cluster.server-quorum-type: server
>>>>>>>>>>>>>>>>> cluster.data-self-heal-algorithm: full
>>>>>>>>>>>>>>>>> cluster.locking-scheme: granular
>>>>>>>>>>>>>>>>> cluster.shd-max-threads: 8
>>>>>>>>>>>>>>>>> cluster.shd-wait-qlength: 10000
>>>>>>>>>>>>>>>>> features.shard: on
>>>>>>>>>>>>>>>>> user.cifs: off
>>>>>>>>>>>>>>>>> server.allow-insecure: on
>>>>>>>>>>>>>>>>> Volume Name: volume2
>>>>>>>>>>>>>>>>> Type: Distributed-Replicate
>>>>>>>>>>>>>>>>> Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
>>>>>>>>>>>>>>>>> Status: Started
>>>>>>>>>>>>>>>>> Snapshot Count: 0
>>>>>>>>>>>>>>>>> Number of Bricks: 3 x 3 = 9
>>>>>>>>>>>>>>>>> Transport-type: tcp
>>>>>>>>>>>>>>>>> Bricks:
>>>>>>>>>>>>>>>>> Brick1: 10.104.0.1:/gluster2/brick/brick1
>>>>>>>>>>>>>>>>> Brick2: 10.104.0.2:/gluster2/brick/brick1
>>>>>>>>>>>>>>>>> Brick3: 10.104.0.3:/gluster2/brick/brick1
>>>>>>>>>>>>>>>>> Brick4: 10.104.0.1:/gluster2/brick/brick2
>>>>>>>>>>>>>>>>> Brick5: 10.104.0.2:/gluster2/brick/brick2
>>>>>>>>>>>>>>>>> Brick6: 10.104.0.3:/gluster2/brick/brick2
>>>>>>>>>>>>>>>>> Brick7: 10.104.0.1:/gluster2/brick/brick3
>>>>>>>>>>>>>>>>> Brick8: 10.104.0.2:/gluster2/brick/brick3
>>>>>>>>>>>>>>>>> Brick9: 10.104.0.3:/gluster2/brick/brick3
>>>>>>>>>>>>>>>>> Options Reconfigured:
>>>>>>>>>>>>>>>>> nfs.disable: on
>>>>>>>>>>>>>>>>> performance.readdir-ahead: on
>>>>>>>>>>>>>>>>> transport.address-family: inet
>>>>>>>>>>>>>>>>> cluster.quorum-type: auto
>>>>>>>>>>>>>>>>> network.ping-timeout: 10
>>>>>>>>>>>>>>>>> auth.allow: *
>>>>>>>>>>>>>>>>> performance.quick-read: off
>>>>>>>>>>>>>>>>> performance.read-ahead: off
>>>>>>>>>>>>>>>>> performance.io-cache: off
>>>>>>>>>>>>>>>>> performance.stat-prefetch: off
>>>>>>>>>>>>>>>>> performance.low-prio-threads: 32
>>>>>>>>>>>>>>>>> network.remote-dio: enable
>>>>>>>>>>>>>>>>> cluster.eager-lock: enable
>>>>>>>>>>>>>>>>> cluster.server-quorum-type: server
>>>>>>>>>>>>>>>>> cluster.data-self-heal-algorithm: full
>>>>>>>>>>>>>>>>> cluster.locking-scheme: granular
>>>>>>>>>>>>>>>>> cluster.shd-max-threads: 8
>>>>>>>>>>>>>>>>> cluster.shd-wait-qlength: 10000
>>>>>>>>>>>>>>>>> features.shard: on
>>>>>>>>>>>>>>>>> user.cifs: off
>>>>>>>>>>>>>>>>> storage.owner-uid: 36
>>>>>>>>>>>>>>>>> storage.owner-gid: 36
>>>>>>>>>>>>>>>>> server.allow-insecure: on
>>>>>>>>>>>>>>>>> [root@n1 ~]# gluster volume status
>>>>>>>>>>>>>>>>> Status of volume: volume1
>>>>>>>>>>>>>>>>> Gluster process TCP Port RDMA Port Online Pid
>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick1 49152 0 Y 3464
>>>>>>>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick1 49152 0 Y 68937
>>>>>>>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick1 49161 0 Y 94506
>>>>>>>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick2 49153 0 Y 3457
>>>>>>>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick2 49153 0 Y 68943
>>>>>>>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick2 49162 0 Y 94514
>>>>>>>>>>>>>>>>> Brick 10.104.0.1:/gluster/brick/brick3 49154 0 Y 3465
>>>>>>>>>>>>>>>>> Brick 10.104.0.2:/gluster/brick/brick3 49154 0 Y 68949
>>>>>>>>>>>>>>>>> Brick 10.104.0.3:/gluster/brick/brick3 49163 0 Y 94520
>>>>>>>>>>>>>>>>> Self-heal Daemon on localhost N/A N/A Y 54356
>>>>>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.2 N/A N/A Y 962
>>>>>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977
>>>>>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603
>>>>>>>>>>>>>>>>> Task Status of Volume volume1
>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>>>>>> There are no active volume tasks
>>>>>>>>>>>>>>>>> Status of volume: volume2
>>>>>>>>>>>>>>>>> Gluster process TCP Port RDMA Port Online Pid
>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick1 49155 0 Y 3852
>>>>>>>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick1 49158 0 Y 68955
>>>>>>>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick1 49164 0 Y 94527
>>>>>>>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick2 49156 0 Y 3851
>>>>>>>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick2 49159 0 Y 68961
>>>>>>>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick2 49165 0 Y 94533
>>>>>>>>>>>>>>>>> Brick 10.104.0.1:/gluster2/brick/brick3 49157 0 Y 3883
>>>>>>>>>>>>>>>>> Brick 10.104.0.2:/gluster2/brick/brick3 49160 0 Y 68968
>>>>>>>>>>>>>>>>> Brick 10.104.0.3:/gluster2/brick/brick3 49166 0 Y 94541
>>>>>>>>>>>>>>>>> Self-heal Daemon on localhost N/A N/A Y 54356
>>>>>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.2 N/A N/A Y 962
>>>>>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.3 N/A N/A Y 108977
>>>>>>>>>>>>>>>>> Self-heal Daemon on 10.104.0.4 N/A N/A Y 61603
>>>>>>>>>>>>>>>>> Task Status of Volume volume2
>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------------
>>>>>>>>>>>>>>>>> There are no active volume tasks
>>>>>>>>>>>>>>>>> I think oVirt can't read valid information about gluster.
>>>>>>>>>>>>>>>>> I can't continue upgrading the other hosts while this problem exists.
>>>>>>>>>>>>>>>>> Please help me :)
>>>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>>> Tibor