On August 29, 2014 at 3:01:49 PM, Paul Robert Marino (prmarino1@gmail.com) wrote:
On Fri, Aug 29, 2014 at 12:25 PM, Vijay Bellur <vbellur@redhat.com> wrote:
> On 08/29/2014 07:34 PM, David King wrote:
>>
>> Paul,
>>
>> Thanks for the response.
>>
>> You mention that the issue is orphaned files during updates when one
>> node is down. However I am less concerned about adding and removing
>> files because the file server will be predominantly VM disks, so the file
>> structure is fairly static. Those VM files will be quite active, however
>> - will gluster be able to keep track of partial updates to a large file
>> when one out of two bricks is down?
>>
>
> Yes, gluster only updates regions of the file that need to be synchronized
> during self-healing. More details on this synchronization can be found in
> the self-healing section of afr's design document [1].
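As a quick illustration, pending self-heals can be inspected and triggered from the gluster CLI; the volume name below is hypothetical:

```shell
# List files with pending self-heal on each brick of the volume
# ("vmstore" is a hypothetical volume name)
gluster volume heal vmstore info

# Trigger a full self-heal sweep, e.g. after a brick has been down
gluster volume heal vmstore full
```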
>
>
>> Right now I am leaning towards using SSD for "host local" disk - single
>> brick gluster volumes intended for VMs which are node specific and then
I wouldn't use single-brick gluster volumes for local disk. You don't
need them, and they will actually make things more complicated with no
real benefit.
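In other words, for node-local VM images you can just format and mount the SSD directly; a plain filesystem is simpler than a single-brick volume. A minimal sketch (the device and mount point below are hypothetical):

```shell
# Use the local SSD directly as a filesystem for node-local VM images
# (/dev/sdb1 and /var/lib/libvirt/images are hypothetical examples)
mkfs.xfs /dev/sdb1
mount /dev/sdb1 /var/lib/libvirt/images
```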
>> 3 way replicas for the higher availability zones which tend to be more
>> read oriented. I presume that read-only access only needs to get data
>> from one of the 3 replicas so that should be reasonably performant.
>
>
> Yes, read operations are directed to only one of the replicas.
>
> Regards,
> Vijay
>
> [1] https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md
>
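For reference, a 3-way replicated volume like the one discussed above could be created along these lines (host names, volume name, and brick paths are hypothetical):

```shell
# Create and start a 3-way replicated volume across three hosts
# (host1..host3 and the brick paths are hypothetical)
gluster volume create ha-vms replica 3 \
    host1:/export/brick1 host2:/export/brick1 host3:/export/brick1
gluster volume start ha-vms
```

Reads served from such a volume only need to contact one of the three replicas, as Vijay notes above.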