[ovirt-users] Missing dom_md/ids file

Nir Soffer nsoffer at redhat.com
Fri Feb 19 22:24:52 UTC 2016


On Fri, Feb 19, 2016 at 11:54 PM, Cameron Christensen
<cameron.christensen at uk2group.com> wrote:
> I cannot put the gluster domain into maintenance. I'm believe this is
> because the data center has a status of Non responsive (because a host
> cannot connect to storage or start SPM). The only option available on the
> gluster storage is activate. I have put all the hosts into maintenance. Is
> this enough to continue with the initialize lockspace step?

Yes, if all hosts are in maintenance, no host will access the gluster
storage domain, and you can repair it safely.
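
If you want an extra sanity check before touching the storage (not part
of the steps below, and assuming sanlock is still running on the hosts),
you can verify on each host that sanlock no longer holds a lockspace for
the domain and that the domain is no longer mounted:

    sanlock client status | grep <sd_uuid>
    grep <sd_uuid> /proc/mounts

Both should produce no output on a host in maintenance.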

>
> On Fri, 2016-02-19 at 23:34 +0200, Nir Soffer wrote:
>
> On Fri, Feb 19, 2016 at 10:58 PM, Cameron Christensen
> <cameron.christensen at uk2group.com> wrote:
>
> Hello,
>
> I am using glusterfs storage and ran into a split-brain issue. One of the
> files affected by the split-brain was dom_md/ids. In an attempt to fix the
> split-brain issue I deleted the dom_md/ids file. Is there a method to
> recreate or reconstruct this file?
>
>
> You can do this:
>
> 1. Put the gluster domain into maintenance (via engine)
>
> No host should access it while you reconstruct the ids file
>
> 2. Mount the gluster volume manually
>
> mkdir repair
> mount -t glusterfs <server>:/<path> repair/
>
> 3. Create the file (ownership note after step 6):
>
> touch repair/<sd_uuid>/dom_md/ids
>
> 4. Initialize the lockspace (see the verification note after step 6)
>
> sanlock direct init -s <sd_uuid>:0:repair/<sd_uuid>/dom_md/ids:0
>
> 5. Unmount the gluster volume
>
> umount repair
>
> 6. Activate the gluster domain (via engine)
>
> The domain should become active after a while.
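
Two notes on the steps above:

After creating the ids file in step 3, you will likely need to give it
the ownership vdsm expects on file-based domains (vdsm:kvm, uid/gid 36),
since touching it as root leaves it root-owned:

    chown 36:36 repair/<sd_uuid>/dom_md/ids

And to confirm that step 4 actually wrote the lockspace, you can dump the
file and check that the records carry the <sd_uuid> lockspace name:

    sanlock direct dump repair/<sd_uuid>/dom_md/ids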
>
>
> David: can you confirm this is the best way to reconstruct the ids file?
>
> Nir


