[ovirt-users] post glusterfs 3.4 -> 3.5 upgrade issue in ovirt (3.4.0-1.fc19): bricks unavailable
Alastair Neil
ajneil.tech at gmail.com
Wed May 28 11:06:39 EDT 2014
I just noticed something in the console and I don't know if it is relevant:
when I look at the "General" tab on the hosts, "GlusterFS Version" shows
"N/A".
On 28 May 2014 11:03, Alastair Neil <ajneil.tech at gmail.com> wrote:
> The oVirt version is 3.4. I did have a slightly older version of vdsm on
> gluster0, but I have updated it and the issue persists. The compatibility
> version on the storage cluster is 3.3.
>
> I checked the logs for GlusterSyncJob notifications and there are none.
>
> On 28 May 2014 10:19, Sahina Bose <sabose at redhat.com> wrote:
>
>> Hi Alastair,
>>
>> This could be a mismatch between the hostname identified in ovirt and the
>> one in gluster.
>>
>> You could check for any exceptions from GlusterSyncJob in engine.log.
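>> For example (a rough sketch, assuming the default engine log location):
>>
>>   grep -i GlusterSyncJob /var/log/ovirt-engine/engine.log | grep -iE 'error|exception'
>>
>> To rule out a hostname mismatch, you could also compare the peer names from
>> "gluster peer status" on each node with the host names/addresses shown for
>> the hosts in the ovirt UI.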
>>
>> Also, what version of ovirt are you using, and what is the compatibility
>> version of your cluster?
>>
>>
>> On 05/28/2014 12:40 AM, Alastair Neil wrote:
>>
>> Hi, thanks for the reply. Here is an extract from a grep I ran on the
>> vdsm log for the volume name vm-store. It seems to indicate the
>> bricks are ONLINE.
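>> (The grep was roughly "grep vm-store /var/log/vdsm/vdsm.log", assuming the
>> default vdsm log location.)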
>>
>> I am uncertain how to extract meaningful information from the
>> engine.log; can you provide some guidance?
>>
>> Thanks,
>>
>> Alastair
>>
>>
>>
>>> Thread-100::DEBUG::2014-05-27
>>> 15:01:06,335::BindingXMLRPC::1067::vds::(wrapper) client
>>> [129.174.94.239]::call volumeStatus with ('vm-store', '', '') {}
>>> Thread-100::DEBUG::2014-05-27
>>> 15:01:06,356::BindingXMLRPC::1074::vds::(wrapper) return volumeStatus with
>>> {'volumeStatus': {'bricks': [{'status': 'ONLINE', 'brick':
>>> 'gluster0:/export/brick0', 'pid': '2675', 'port': '49158', 'hostuuid':
>>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}, {'status': 'ONLINE', 'brick':
>>> 'gluster1:/export/brick4/vm-store', 'pid': '2309', 'port': '49158',
>>> 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}], 'nfs': [{'status':
>>> 'ONLINE', 'hostname': '129.174.126.56', 'pid': '27012', 'port': '2049',
>>> 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE',
>>> 'hostname': 'gluster0', 'pid': '12875', 'port': '2049', 'hostuuid':
>>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'shd': [{'status': 'ONLINE',
>>> 'hostname': '129.174.126.56', 'pid': '27019', 'hostuuid':
>>> '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status': 'ONLINE', 'hostname':
>>> 'gluster0', 'pid': '12882', 'hostuuid':
>>> 'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'name': 'vm-store'}, 'status':
>>> {'message': 'Done', 'code': 0}}
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:15,339::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:25,381::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:45,465::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:01:55,507::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:02:05,549::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:02:15,590::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:02:25,657::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:02:35,698::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:02:45,740::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:02:55,784::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:03:05,827::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:03:15,869::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:03:25,910::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:03:35,953::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:03:45,996::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:03:56,037::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:04:06,078::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:04:16,107::fileSD::140::Storage.StorageDomain::(__init__) Reading domain
>>> in path
>>> /rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8
>>> Thread-16::DEBUG::2014-05-27
>>> 15:04:16,126::persistentDict::234::Storage.PersistentDict::(refresh) read
>>> lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=Gluster-VM-Store',
>>> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
>>> 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=VS-VM',
>>> 'POOL_DOMAINS=6d637c7f-a4ab-4510-a0d9-63a04c55d6d8:Active,6d1e2f10-e6ec-42ce-93d5-ee93e8eeeb10:Active',
>>> 'POOL_SPM_ID=3', 'POOL_SPM_LVER=7',
>>> 'POOL_UUID=9a0b5f4a-4a0f-432c-b70c-53fd5643cbb7',
>>> 'REMOTE_PATH=gluster0:vm-store', 'ROLE=Master',
>>> 'SDUUID=6d637c7f-a4ab-4510-a0d9-63a04c55d6d8', 'TYPE=GLUSTERFS',
>>> 'VERSION=3', '_SHA_CKSUM=8e747f0ebf360f1db6801210c574405dd71fe731']
>>> Thread-16::DEBUG::2014-05-27
>>> 15:04:16,153::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:04:26,196::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>> Thread-16::DEBUG::2014-05-27
>>> 15:04:36,238::fileSD::225::Storage.Misc.excCmd::(getReadDelay) '/bin/dd
>>> iflag=direct
>>> if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
>>> bs=4096 count=1' (cwd None)
>>
>> On 21 May 2014 23:51, Kanagaraj <kmayilsa at redhat.com> wrote:
>>
>>> engine.log and vdsm.log?
>>>
>>> This can mostly happen due to one of the following reasons:
>>> - "gluster volume status vm-store" is not consistently returning the
>>> right output
>>> - ovirt-engine is not able to identify the bricks properly
>>>
>>> Anyway, engine.log will give better clarity.
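>>> One quick way to check the first point (a rough sketch; I believe the
>>> management side consumes the --xml form of the status output, so the
>>> checksums below should match across runs when nothing has changed):
>>>
>>>   for i in 1 2 3; do gluster volume status vm-store --xml | md5sum; sleep 5; done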
>>>
>>>
>>>
>>> On 05/22/2014 02:24 AM, Alastair Neil wrote:
>>>
>>> I just did a rolling upgrade of my gluster storage cluster to the
>>> latest 3.5 bits. This all seems to have gone smoothly, and all the volumes
>>> are online. All volumes are replicated 1x2.
>>>
>>> The ovirt console now insists that two of my volumes, including the
>>> vm-store volume with my VMs happily running on it, have no bricks up.
>>>
>>> It reports "Up but all bricks are down".
>>>
>>> This would seem to be impossible. Gluster on the nodes themselves
>>> reports no issues:
>>>
>>> [root@gluster1 ~]# gluster volume status vm-store
>>>> Status of volume: vm-store
>>>> Gluster process                               Port    Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick gluster0:/export/brick0/vm-store        49158   Y       2675
>>>> Brick gluster1:/export/brick4/vm-store        49158   Y       2309
>>>> NFS Server on localhost                       2049    Y       27012
>>>> Self-heal Daemon on localhost                 N/A     Y       27019
>>>> NFS Server on gluster0                        2049    Y       12875
>>>> Self-heal Daemon on gluster0                  N/A     Y       12882
>>>>
>>>> Task Status of Volume vm-store
>>>>
>>>> ------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>
>>>
>>>
>>> As I mentioned, the VMs are running happily.
>>> Initially the ISOs volume had the same issue. I did a volume start and
>>> stop on the volume, as it was not being actively used, and that cleared up the
>>> issue in the console. However, as I have VMs running, I can't do this for
>>> the vm-store volume.
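>>> (That was essentially a "gluster volume stop <volume>" followed by
>>> "gluster volume start <volume>" on that volume.)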
>>>
>>>
>>> Any suggestions, Alastair
>>>
>>>
>>>
>>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>