

There was another recent post about this, but to sum up: you must have power fencing to support VM HA; otherwise there will be an issue with the engine not knowing whether the VM is still running, and it will not bring the VM up on a new host, to avoid data corruption. Also make sure you have your quorum set up properly, based on your replication scenario, so you can withstand 1 host being lost. I don't believe the VMs will "keep running" as such, given the host is lost, but they would restart on another host. At least that's what I've noticed in my case. On Thu, Feb 6, 2014 at 1:04 PM, Maurice James <midnightsteel@msn.com> wrote:
I currently have a new setup running oVirt 3.3.3. I have a Gluster storage domain with roughly 2.5TB of usable space. Gluster is installed on the same systems as the oVirt hosts. The host breakdown is as follows:
oVirt DC:
4 hosts in the cluster. Each host has 4 physical disks in a RAID 5. Each disk is 500GB. With the OS installed and configured, I end up with 1.2TB of usable space left for my data volume.
Gluster volume:
4 bricks with 1.2TB of space per brick (distribute-replicate leaves me with about 2.5TB in the storage domain).
Does this setup give me enough fault tolerance to survive losing a host, and have my HA VM automatically move to an available host and keep running?
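As a sanity check, the capacity figures quoted above can be reproduced with quick arithmetic (a sketch; the ~1.2TB-per-host figure after OS overhead is taken from the post, not recomputed):

```shell
# Capacity check for the described setup, in GB.
# RAID 5 over n disks yields (n-1) disks' worth of usable space;
# a replica-2 distribute-replicate volume halves the pooled brick space.
disk_gb=500
disks_per_host=4
hosts=4
raid5_gb=$(( disk_gb * (disks_per_host - 1) ))   # 1500 GB raw per host
brick_gb=1200                                    # ~1.2 TB left after the OS, per the post
pool_gb=$(( brick_gb * hosts / 2 ))              # replica 2 across 4 bricks
echo "RAID5 per host: ${raid5_gb} GB, usable Gluster pool: ${pool_gb} GB"
```

2400 GB of pooled space lines up with the "roughly 2.5TB" reported for the storage domain.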
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
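The quorum setup Andrew refers to is configured per volume in Gluster. A hypothetical sketch for a replicated volume named `data` (the volume name is a placeholder, and exact option behavior depends on the GlusterFS version):

```shell
# Client-side quorum: writes are allowed only while a majority of each
# replica set is reachable, which hedges against split-brain.
gluster volume set data cluster.quorum-type auto

# Server-side quorum: bricks on a node are stopped if that node loses
# contact with more than half of the trusted pool.
gluster volume set data cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%
```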

Hmm. So in that case, would I be able to drop the Gluster setup, use NFS on each host, and make sure power fencing is enabled? Will that still achieve fault tolerance, or is replicated Gluster still required?

From: Andrew Lau [mailto:andrew@andrewklau.com]
Sent: Wednesday, February 05, 2014 9:17 PM
To: Maurice James
Cc: users
Subject: Re: [Users] Gluster question

I'm not sure what you mean by "NFS each host", but you'll need some way to at least ensure the data is available, be that replicated Gluster, a centralized SAN, etc. On Thu, Feb 6, 2014 at 1:21 PM, Maurice James <midnightsteel@msn.com> wrote:
Hmm. So in that case, would I be able to drop the Gluster setup, use NFS on each host, and make sure power fencing is enabled? Will that still achieve fault tolerance, or is replicated Gluster still required?
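For reference, a replicated Gluster layout of the kind discussed in this thread would be created along these lines (hostnames and the brick path are placeholders; this is a sketch, not the poster's actual commands):

```shell
# 2x2 distribute-replicate: four bricks, replica 2, so each file lives
# on two hosts and one host can be lost without losing data.
gluster volume create data replica 2 \
  host1:/bricks/data host2:/bricks/data \
  host3:/bricks/data host4:/bricks/data
gluster volume start data
gluster volume info data
```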

OK, I think I got it now. What I meant by NFS on each host was: in oVirt you can set up an NFS storage domain on each host and have it available to all hosts in the cluster.

From: Andrew Lau [mailto:andrew@andrewklau.com]
Sent: Wednesday, February 05, 2014 9:25 PM
To: Maurice James
Cc: users
Subject: Re: [Users] Gluster question

On 02/06/2014 04:29 AM, Maurice James wrote:
OK, I think I got it now. What I meant by NFS on each host was: in oVirt you can set up an NFS storage domain on each host and have it available to all hosts in the cluster.
But then you have no replication/redundancy in case a host fails, while with replicated Gluster you don't care (well, care less) if a host fails?

Thanks

-----Original Message-----
From: Itamar Heim [mailto:iheim@redhat.com]
Sent: Sunday, February 09, 2014 5:13 PM
To: Maurice James; 'Andrew Lau'
Cc: 'users'
Subject: Re: [Users] Gluster question

On 02/06/2014 04:29 AM, Maurice James wrote:
OK, I think I got it now. What I meant by NFS on each host was: in oVirt you can set up an NFS storage domain on each host and have it available to all hosts in the cluster.
But then you have no replication/redundancy in case a host fails, while with replicated Gluster you don't care (well, care less) if a host fails?

Hi,

I think your problem is the following: if you put your shared storage on the same servers where the HA VMs run and you lose one of those servers, you inherently also lose the disk space on that server. Gluster can, in theory, circumvent this. But AFAIK (read on this very mailing list) it is not supported to run Gluster on the same nodes as you run the VMs.

HTH

--
Mit freundlichen Grüßen / Regards

Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
participants (4)
- Andrew Lau
- Itamar Heim
- Maurice James
- Sven Kieske