How to change glusterfs volume type for MASTER storage domain

Hi,

Our ovirt test instance has 4 nodes with a glusterfs master storage domain. I had the volume set to distributed replicated gluster and things worked.

Due to quorum issues, I'm trying to switch it to replicated with replica count 4.

Currently the nodes keep rebooting while the master storage is down and I can't get things up again.

    Invalid status on Data Center Default. Setting status to Non Responsive.
    Host node3 cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center Default. Setting Host state to Non-Operational.

Is there a known way to change the glusterfs volume for the master domain that doesn't end in a total system failure?

Kind regards,

Jorick Astrego
Netbulae
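For what it's worth, gluster changes a volume's replica count through add-brick/remove-brick rather than by editing the volume type in place. Below is a minimal sketch of the CLI sequence, driven from Python; the volume name and brick paths are hypothetical, the quorum option shown is the server-side knob that fences bricks when peers drop out, and the exact add-brick/remove-brick order depends on the current layout, so treat this as an outline rather than a recipe:

    #!/usr/bin/env python
    # Sketch only: gluster CLI steps involved in reshaping a volume.
    # Volume name and brick paths are hypothetical placeholders.
    import subprocess

    VOLUME = "vmstore"  # placeholder volume name

    def gluster(*args):
        # Echo each command before running it so the sequence can be audited.
        cmd = ["gluster", "volume"] + list(args)
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # Inspect the current type, replica count and brick list first.
    gluster("info", VOLUME)

    # Server-side quorum is what takes bricks down when too few peers are
    # reachable; relaxing it while reshaping avoids surprise outages.
    gluster("set", VOLUME, "cluster.server-quorum-type", "none")

    # The replica count is changed by adding bricks with a new count, e.g.
    # growing a replica set to four copies (bricks are placeholders):
    gluster("add-brick", VOLUME, "replica", "4",
            "node3:/bricks/vmstore", "node4:/bricks/vmstore")

    # After the new bricks heal, any distribute leg left over from the old
    # layout would be drained with remove-brick ... start / commit.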

In a similar situation, I've had some luck creating a second storage domain & allowing the master role to be moved to it. You don't even need to have any VMs on it, just have it exist and add it to the engine. That way ovirt doesn't freak out and reboot your nodes as much while you clean up your main storage domain.

  -Darrell

On Sep 8, 2014, at 6:27 AM, Jorick Astrego <j.astrego@netbulae.eu> wrote:

    Is there a known way to change the glusterfs volume for the master domain that doesn't end in a total system failure?
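Darrell's workaround can be done from the UI, but for reference here is a minimal sketch using the oVirt Python SDK (ovirtsdk4); the engine URL, credentials, host and gluster volume names are all placeholders. The new domain only needs to become Active so the engine has somewhere to move the master role while the original domain is repaired:

    # Sketch with placeholder names, using the oVirt Python SDK (ovirtsdk4).
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        insecure=True,  # test setup; use ca_file= in production
    )
    system = connection.system_service()

    # Create a plain data domain on a second gluster volume. It does not
    # need to hold any VMs; it just has to exist and become active.
    sd = system.storage_domains_service().add(
        types.StorageDomain(
            name='master-standin',
            type=types.StorageDomainType.DATA,
            host=types.Host(name='node1'),          # placeholder host
            storage=types.HostStorage(
                type=types.StorageType.GLUSTERFS,
                address='node1.example.com',        # placeholder address
                path='standby-volume',              # placeholder volume
            ),
        ),
    )

    # Attach it to the data center so the engine can elect it as master
    # while the original domain is taken down for maintenance.
    dc = system.data_centers_service().list(search='name=Default')[0]
    system.data_centers_service().data_center_service(dc.id) \
          .storage_domains_service().add(types.StorageDomain(id=sd.id))

    connection.close()

Once the stand-in domain is active, putting the original master domain into maintenance should shift the master role onto it, after which the gluster volume behind the old domain can be reshaped.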

Hi Darrell,

Thanks for that. That will work fine for what I'm trying to do.

Kind regards,

Jorick Astrego
Netbulae

On 09/08/2014 05:14 PM, Darrell Budic wrote:
In a similar situation, I've had some luck creating a second storage domain & allowing the master role to be moved to it. You don't even need to have any VMs on it, just have it exist and add it to the engine. That way ovirt doesn't freak out and reboot your nodes as much while you clean up your main storage domain.
-Darrell
On Sep 8, 2014, at 6:27 AM, Jorick Astrego <j.astrego@netbulae.eu> wrote:
Hi,
Our ovirt test instance has 4 nodes with glusterfs master storage domain. I had the volume set to distributed replicated gluster and things worked.
Due to quorum issues, I'm trying to switch it to replicated with replica count 4.
Currently the nodes keep rebooting while the master storage is down and I can't get things up again.
Invalid status on Data Center Default. Setting status to Non Responsive. Host node3 cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center Default. Setting Host state to Non-Operational.
Is there a known way to change the glusterfs volume for the master domain that doesn't end in a total system failure?
Kind regards,
Jorick Astrego
Netbulae
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
participants (2)
- Darrell Budic
- Jorick Astrego