
19 Feb 2016, 11:54 p.m.
I cannot put the gluster domain into maintenance. I believe this is because the data center has a status of Non Responsive (because a host cannot connect to storage or start SPM). The only option available on the gluster storage is Activate. I have put all the hosts into maintenance. Is this enough to continue with the initialize lockspace step?

On Fri, 2016-02-19 at 23:34 +0200, Nir Soffer wrote:
> On Fri, Feb 19, 2016 at 10:58 PM, Cameron Christensen
> <cameron.christensen@uk2group.com> wrote:
> > Hello,
> >
> > I am using glusterfs storage and ran into a split-brain issue. One of the
> > files affected by split-brain was dom_md/ids. In an attempt to fix the
> > split-brain issue I deleted the dom_md/ids file. Is there a method to
> > recreate or reconstruct this file?
>
> You can do this:
>
> 1. Put the gluster domain into maintenance (via engine)
>
>    No host should access it while you reconstruct the ids file.
>
> 2. Mount the gluster volume manually
>
>    mkdir repair
>    mount -t glusterfs <server>:/<path> repair/
>
> 3. Create the file:
>
>    touch repair/<sd_uuid>/dom_md/ids
>
> 4. Initialize the lockspace
>
>    sanlock direct init -s <sd_uuid>:0:repair/<sd_uuid>/dom_md/ids:0
>
> 5. Unmount the gluster volume
>
>    umount repair
>
> 6. Activate the gluster domain (via engine)
>
>    The domain should become active after a while.
>
>
> David: can you confirm this is the best way to reconstruct the ids
> file?
>
> Nir
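
For reference, the quoted steps can be collected into one sequence of shell commands. This is only a sketch of the procedure above, run from a host that can reach the gluster volume: <server>, <path> and <sd_uuid> are placeholders for the actual gluster server, volume path and storage domain UUID, and no host should be accessing the domain while it runs.

    # mount the gluster volume on a temporary mount point
    mkdir repair
    mount -t glusterfs <server>:/<path> repair/

    # recreate the deleted ids file, then initialize the sanlock
    # lockspace for the storage domain at offset 0 of that file
    touch repair/<sd_uuid>/dom_md/ids
    sanlock direct init -s <sd_uuid>:0:repair/<sd_uuid>/dom_md/ids:0

    # clean up; the domain is then activated via the engine UI
    umount repair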