
Hi Nir,

I have a thread open on the gluster side about heal-failed operations, so I'll wait for a response there. Agreed on two-node quorum; I'm waiting for a 3rd node right now :) In the meantime, or for anyone who reads this thread: if you only have 2 storage nodes, you have to weigh the risks of running 2 nodes with quorum (ensuring storage consistency) against 2 nodes without quorum (an extra shot at uptime).

Steve Dainard
IT Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic

On Wed, Feb 19, 2014 at 4:13 AM, Nir Soffer <nsoffer@redhat.com> wrote:
----- Original Message -----
From: "Steve Dainard" <sdainard@miovision.com> To: "Nir Soffer" <nsoffer@redhat.com> Cc: "users" <users@ovirt.org> Sent: Tuesday, February 11, 2014 7:42:37 PM Subject: Re: [Users] Ovirt 3.3.2 Cannot attach POSIX (gluster) storage domain
Enabled logging, logs attached.
According to the sanlock and gluster logs:

1. On the host, sanlock fails to write to the ids volume.
2. On the gluster side, we see a failure to heal the ids file.
This looks like a glusterfs issue, and should be handled by the glusterfs folks.
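If you want to confirm both symptoms on your side, something like the following should show them (I'm using "vm-store" as a placeholder for your actual volume name, and the heal-failed sub-command depends on your gluster version):

    # On one of the gluster nodes: list entries pending heal / failed to heal
    gluster volume heal vm-store info
    gluster volume heal vm-store info heal-failed

    # On the oVirt host: show the lockspaces sanlock is currently managing
    sanlock client status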
You should probably set the sanlock log level back to the default by commenting out the configuration I suggested in the previous mail.
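For example, if the extra logging was enabled through the service options, it would look roughly like this (just an illustration; the exact file and flag depend on how you enabled it, and it may instead live in sanlock's own config file):

    # /etc/sysconfig/sanlock -- comment the debug log priority back out
    # SANLOCKOPTS="$SANLOCKOPTS -L 7"

    service sanlock restart    # or: systemctl restart sanlock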
According to the gluster configuration in this log, this looks like 2 replicas with auto quorum. This setup is not recommended, because both machines must be up all the time: when one machine is down, your entire storage is down.
Check this post explaining this issue: http://lists.ovirt.org/pipermail/users/2014-February/021541.html
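If you do stay on 2 nodes for now, these are roughly the knobs involved (again, "vm-store" is a placeholder; check what is actually set with gluster volume info):

    # Show quorum-related options currently set on the volume
    gluster volume info vm-store | grep -i quorum

    # "auto" client-side quorum refuses writes when quorum is lost - safer for
    # consistency, but with 2 replicas it ties the storage to both nodes being up
    gluster volume set vm-store cluster.quorum-type auto

    # "none" keeps the volume writable with a single brick up - better uptime,
    # but exposes you to split-brain
    gluster volume set vm-store cluster.quorum-type none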
Thanks, Nir