Hi Paul,
I would prefer to do a direct mount for local disk. However, I am not certain how to
configure a single system with both local storage and gluster replicated storage.
- The “Configure Local Storage” option for Hosts wants to make a datacenter and cluster
for the system. I presume that’s because oVirt wants to be able to mount the storage on
all hosts in a datacenter.
- Configuring a POSIX storage domain with local disk does not work as oVirt wants to mount
the disk on all systems in the datacenter.
I suppose my third option would be to put these systems as libvirt VMs and not manage them
with oVirt. This is fairly reasonable since I use Foreman for provisioning, but I
will need to figure out how to make libvirt and oVirt co-exist. Has anyone tried this?
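For what it's worth, the sort of thing I have in mind is roughly the following (the
domain XML path and VM name are just placeholders):

```shell
# Rough sketch of managing a VM directly with libvirt, outside of oVirt.
# The XML path and "local-vm" name are placeholders, not real config.
virsh define /path/to/local-vm.xml   # register the domain with libvirtd
virsh start local-vm                 # boot the VM
virsh autostart local-vm             # restart it automatically after host reboots
```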
Am I missing other options for local non-replicated disk?
Thanks,
David
--
David King
On August 29, 2014 at 3:01:49 PM, Paul Robert Marino (prmarino1(a)gmail.com) wrote:
On Fri, Aug 29, 2014 at 12:25 PM, Vijay Bellur <vbellur(a)redhat.com> wrote:
On 08/29/2014 07:34 PM, David King wrote:
>
> Paul,
>
> Thanks for the response.
>
> You mention that the issue is orphaned files during updates when one
> node is down. However I am less concerned about adding and removing
> files because the file server will be predominately VM disks so the file
> structure is fairly static. Those VM files will be quite active however
> - will gluster be able to keep track of partial updates to a large file
> when one out of two bricks is down?
>
Yes, gluster only updates regions of the file that need to be synchronized
during self-healing. More details on this synchronization can be found in
the self-healing section of afr's design document [1].
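As an aside, the pending heal state can be inspected from the gluster CLI; the
volume name below is only an example:

```shell
# List files pending self-heal after a brick comes back online
# ("vmstore" is an example volume name)
gluster volume heal vmstore info
# Optionally trigger a full self-heal sweep
gluster volume heal vmstore full
```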
> Right now I am leaning towards using SSD for "host local" disk - single
> brick gluster volumes intended for VMs which are node specific and then
I wouldn't use single-brick gluster volumes for local disk. You don't
need them, and they will actually make things more complicated with no real
benefit.
> 3 way replicas for the higher availability zones which tend to be more
> read oriented. I presume that read-only access only needs to get data
> from one of the 3 replicas so that should be reasonably performant.
Yes, read operations are directed to only one of the replicas.
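If it helps, which replica serves reads can be influenced with volume options,
for example (again, the volume name is only an example):

```shell
# Prefer a brick local to the reading client when one is available
# ("vmstore" is an example volume name)
gluster volume set vmstore cluster.choose-local on
```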
Regards,
Vijay
[1]
https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md