[ovirt-users] gluster host without raid?

Jorick Astrego j.astrego at netbulae.eu
Tue Aug 19 08:52:33 UTC 2014


Hi Tibor,

It all depends on whether you can risk the downtime.
Buying a couple of larger HDDs is probably cheaper than spending a 
couple of hours of your time on it when things break. You could have 
one node down permanently, replace it completely and heal the volume 
(http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server)
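
Roughly, that procedure boils down to something like the following 
(just a sketch from memory; <dead-host>, <volname> and /path/to/brick 
are placeholders for your own values, so double-check everything 
against the wiki page above):

    # On a surviving node: find the UUID the dead peer had
    grep -r <dead-host> /var/lib/glusterd/peers/

    # On the reinstalled node: put that UUID into
    # /var/lib/glusterd/glusterd.info (UUID=...) and restart glusterd
    service glusterd restart

    # Re-join the pool (run from a surviving node) and verify
    gluster peer probe <dead-host>
    gluster peer status

    # Recreate the brick directory and copy the volume-id xattr from a
    # healthy brick so glusterd accepts the new brick again
    getfattr -n trusted.glusterfs.volume-id -e hex /path/to/brick    # on a good node
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-value> /path/to/brick    # on the new node
    service glusterd restart

    # Trigger a full self-heal of the volume
    gluster volume heal <volname> full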

During this time the cluster is at risk and performance will suffer 
while it heals. I think the heal will be quick though if you can 
reattach the same data disk.
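
You can keep an eye on the heal from any node, for example (again just 
a sketch, with <volname> as a placeholder for your volume name):

    # Files still pending heal, listed per brick
    gluster volume heal <volname> info

    # Overall brick and process status of the volume
    gluster volume status <volname>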

Kind regards,

Jorick Astrego
Netbulae B.V.

On 08/19/2014 10:09 AM, Punit Dambiwal wrote:
> Hi Tibor,
>
> In this setup, if your host's OS disk crashes, your brick data also 
> becomes inaccessible... better to have RAID 1 for the OS...
>
> Thanks,
> Punit
>
>
> On Tue, Aug 19, 2014 at 2:29 PM, Demeter Tibor <tdemeter at itsmart.hu 
> <mailto:tdemeter at itsmart.hu>> wrote:
>
>     Hi,
>
>     I would like to build a four-node gluster-based cluster, but I
>     don't have enough HDDs. I'm just wondering: on the hosts (not on
>     the portal) I won't use RAID for the main system (OS).
>     I have disks only for bricks. I will give one disk to the OS and
>     one disk to the gluster brick per server.
>
>     Is it a good idea?
>     What will happen if the main system's HDD breaks?
>     Can I recreate the gluster+vdsm host without data loss?
>     How would I do it?
>
>
>     Thanks in advance.
>
>     Tibor
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
