[Users] Issues using local storage for gluster shared volume

I have been trying for a month or so to get a 2 node cluster up and running. I have engine installed on the first node, then add each system as a host to a posix dc. Both boxes have 4 data disks. After adding the hosts I create a distributed-replicate volume using 3 disks from each host with ext4 filesystems. I click the 'optimize for virt' option on the volume. There is a message in events that says that it can't set a volume option, then it sets 2 volume options. Checking the options tab I see that it added the gid/uid options. I was unable to find in the logs which option was not set, I just see a message about usage for volume set <volname> <option>.

The volume starts fine and I am able to create a data domain on the volume. Once the domain is created I try to create a vm and it fails creating the disk. Error messages are along the lines of "task file exists" and "can't remove task files". There are directories under tasks, and when trying to manually remove them I get a "directory not empty" error. Can someone please shed some light on what I am doing wrong to get this 2 node cluster with local disk as shared storage up and running?

Thanks,
Tony
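For anyone following along, the setup described above corresponds roughly to the gluster CLI sequence below; the volume name, hostnames and brick paths are placeholders, and the two owner options are the ones 'optimize for virt' did manage to set (36:36 being the vdsm user and kvm group on oVirt hosts):

    # create a 3x2 distributed-replicate volume from 3 bricks per host
    gluster volume create data-vol replica 2 \
        host1:/export/brick1 host2:/export/brick1 \
        host1:/export/brick2 host2:/export/brick2 \
        host1:/export/brick3 host2:/export/brick3

    # the two options the UI was able to set (uid/gid 36 = vdsm:kvm)
    gluster volume set data-vol storage.owner-uid 36
    gluster volume set data-vol storage.owner-gid 36

    gluster volume start data-vol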

----- Original Message -----
From: "Tony Feldmann" <trfeldmann@gmail.com> To: users@ovirt.org Sent: Thursday, March 28, 2013 8:19:17 PM Subject: [Users] Issues using local storage for gluster shared volume
I click the 'optimize for virt' option on the volume. There is a message in events that says that it can't set a volume option, then it sets 2 volume options. Checking the options tab I see that it added the gid/uid options.
The gid and uid options are enough to make a gluster volume ready for virt store. The third option sets a group of options (called the virt group) on the volume, mainly related to performance tuning. To make this option work, you have to copy the file https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example to /var/lib/glusterd/groups/ and name it 'virt'. Now you can click 'Optimize for virt store' again to set the virt group. Setting this group option is recommended but not necessary for using the gluster volume as a virt store. I am not sure about the errors below; other people on the list can help you out.

Thanks,
Kanagaraj
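If it is easier to do from the shell, the same steps can be sketched roughly as follows; this assumes the group-virt.example file mentioned above has already been downloaded to the current directory and that the volume is called data-vol:

    # put the virt option group where glusterd looks for named groups
    # (do this on the gluster hosts)
    cp group-virt.example /var/lib/glusterd/groups/virt

    # apply the whole group in one go - the CLI equivalent of clicking
    # 'Optimize for virt store' once the file is in place
    gluster volume set data-vol group virt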
The volume starts fine and I am able to create a data domain on the volume. Once the domain is created I try to create a vm and it fails creating the disk. Error messages are along the lines of "task file exists" and "can't remove task files".

On 03/28/2013 08:19 PM, Tony Feldmann wrote:
After adding the hosts I create a distributed-replicate volume using 3 disks from each host with ext4 filesystems.
There are known problems with ext4 and gluster at the moment. Can you please confirm if you see similar behaviour with xfs and gluster? Thanks, Vijay
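For anyone wanting to test this, one way to compare is to check what the bricks are currently formatted as and rebuild them on xfs; the mount point and device below are only examples, and mkfs will of course wipe the brick:

    # see which filesystem a brick directory lives on
    df -T /export/brick1

    # rebuild the brick on xfs (512-byte inodes are commonly used for gluster bricks)
    umount /export/brick1
    mkfs.xfs -f -i size=512 /dev/sdb1
    mount /dev/sdb1 /export/brick1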

Aren't there concerns with xfs and large files in cases of failures? I was under the impression that if xfs was writing to a file and the system died it would zero out the entire file. Just hesitant to put large vm files on a fs like that. Is this still an issue with xfs?

On 03/29/2013 07:19 PM, Tony Feldmann wrote:
I was under the impression that if xfs was writing to a file and the system died it would zero out the entire file. Is this still an issue with xfs?
There are no known problems with recent kernels. There are quite a few enterprise storage solutions that run on xfs. Thanks, Vijay

Great, I put things on xfs this weekend and all seems to be running fine. Thanks for the info.
participants (3)
- Kanagaraj Mayilsamy
- Tony Feldmann
- Vijay Bellur