Aren't there concerns with xfs and large files in cases of failures?  I was under the impression that if xfs was writing to a file and the system died it would zero out the entire file.  Just hesitant to put large vm files on a fs like that.  Is this still an issue with xfs?

On Fri, Mar 29, 2013 at 1:08 AM, Vijay Bellur <vbellur@redhat.com> wrote:
On 03/28/2013 08:19 PM, Tony Feldmann wrote:
I have been trying for a month or so to get a 2 node cluster up and
running.  I have engine installed on the first node, then add each each
system as a host to a posix dc.  Both boxes have 4 data disks.  After
adding the hosts I create a distributed replicate volume using 3 disk
from each host with ext4 filesystems. I click the 'optimize for virt'
option on the volume.  There is a message in events that says that it
can't set a volume option, then it sets 2 volume options.  Checking the
options tab I see that it added the gid/uid options.  I was unable to
find in the logs what option was not set, I just see a message about
usage for volume set <volname> <option>.  The volume starts fine and I
am able to create a data domain on the volume.  Once the domain is
created I try to create a vm and it fails creating the disk.  Error
messages are along the lines of task file exists and can't remove task
files.  There are directories under tasks and when trying to manually
remove them I get the "directory not empty" error.  Can someone please
shed some light on what I am doing wrong to get this 2 node cluster with
local disk as shared storage up and running?


There are known problems with ext4 and gluster at the moment. Can you please confirm if you see similar behaviour with xfs and gluster?

Thanks,
Vijay