Hi Paul,

I would prefer to do a direct mount for local disk. However, I am not certain how to configure a single system with both local storage and gluster-replicated storage.

- The “Configure Local Storage” option for hosts wants to create a dedicated datacenter and cluster for the system. I presume that is because oVirt wants to be able to mount the storage on all hosts in a datacenter.

- Configuring a POSIX storage domain with a local disk does not work either, since oVirt wants to mount the disk on all systems in the datacenter.

I suppose my third option would be to run these systems as plain libvirt VMs and not manage them with oVirt. That is fairly reasonable, since I use Foreman for provisioning, except that I will need to figure out how to make the two coexist. Has anyone tried this?

Am I missing other options for local, non-replicated disk?
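
For concreteness, the per-host layout I have in mind is roughly the following (untested; the device, paths, and volume name are just placeholders):

    # local SSD, direct-mounted for node-specific VM images
    mkfs.xfs /dev/sdb
    mkdir -p /data/vm-local
    mount /dev/sdb /data/vm-local        # plus a matching fstab entry

    # 3-way replicated gluster volume for the higher-availability VMs
    gluster volume create vm-ha replica 3 \
        host1:/bricks/vm-ha host2:/bricks/vm-ha host3:/bricks/vm-ha
    gluster volume start vm-ha

The replicated volume maps cleanly onto a GlusterFS (or POSIX) storage domain; what I cannot see is how to hand /data/vm-local to the engine without it trying to mount that path from every host in the datacenter.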
Thanks,
David

--
David King

On August 29, 2014 at 3:01:49 PM, Paul Robert Marino (prmarino1@gmail.com) wrote:

On Fri, Aug 29, 2014 at 12:25 PM, Vijay Bellur <vbellur@redhat.com> wrote:
> On 08/29/2014 07:34 PM, David King wrote:
>>
>> Paul,
>>
>> Thanks for the response.
>>
>> You mention that the issue is orphaned files during updates when one
>> node is down. However I am less concerned about adding and removing
>> files because the file server will be predominately VM disks so the file
>> structure is fairly static. Those VM files will be quite active however
>> - will gluster be able to keep track of partial updates to a large file
>> when one out of two bricks are down?
>>
>
> Yes, gluster only updates regions of the file that need to be synchronized
> during self-healing. More details on this synchronization can be found in
> the self-healing section of afr's design document [1].
>
>
>> Right now I am leaning towards using SSD for "host local" disk - single
>> brick gluster volumes intended for VMs which are node specific and then

I wouldn't use single brick gluster volumes for local disk; you don't
need it, and it will actually make it more complicated with no real
benefits.

>> 3 way replicas for the higher availability zones which tend to be more
>> read oriented. I presume that read-only access only needs to get data
>> from one of the 3 replicas so that should be reasonably performant.
>
>
> Yes, read operations are directed to only one of the replicas.
>
> Regards,
> Vijay
>
> [1] https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md
>
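
PS: For anyone who finds this thread later, the state of the partial-file heals Vijay describes can be checked from any of the gluster servers (the volume name is just a placeholder):

    gluster volume heal vm-ha info
    gluster volume heal vm-ha info split-brain

The first lists the entries that still need to be synchronized after a brick comes back online; the second lists anything that ended up in split-brain and needs manual attention.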