[ovirt-users] Ovirt & Ceph

rajatjpatel rajatjpatel at gmail.com
Sun Dec 18 16:35:47 UTC 2016


In fact, after reading a lot of KB articles I was thinking of running an
all-in-one OpenStack and using Cinder as block storage.

Regards
Rajat

On Sun, Dec 18, 2016 at 8:33 PM rajatjpatel <rajatjpatel at gmail.com> wrote:

> Great, thanks! Alessandro ++ Yaniv ++
>
> What I want is to use around 4 TB of SAS disk for my oVirt (which is going
> to be RHV 4.0.5 once the POC is 100% successful; in fact all products will be RH).
>
> I have done a lot of searching on DuckDuckGo for all these solutions and
> used many references from ovirt.org & access.redhat.com for setting up the
> oVirt engine and hypervisors.
>
> We don't mind having more guests running to create Ceph block storage,
> which will be presented to oVirt as storage. Gluster is not in use right
> now because we will have databases running on the guests.
>
> Regards
> Rajat
>
> On Sun, Dec 18, 2016 at 8:21 PM Alessandro De Salvo <
> Alessandro.DeSalvo at roma1.infn.it> wrote:
>
> Hi,
> having a 3-node Ceph cluster is the bare minimum you need to make it
> work, unless you want just a replica-2 mode, which is not safe.
> It's not true that Ceph is hard to configure: you can very easily use
> ceph-deploy, have Puppet configure it, or even run it in
> containers. Using Docker is in fact the easiest solution; it really
> takes 10 minutes to bring a cluster up. I've tried it both with Jewel
> (official containers) and Kraken (custom containers), and it works pretty
> well.
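For anyone following along, the ceph-deploy route mentioned above can be
sketched roughly like this (a non-authoritative sketch from the Jewel-era
workflow; node1-node3 and /dev/sdb are placeholder names, and exact syntax
varies between Ceph releases):

```shell
# Minimal 3-node Ceph cluster with ceph-deploy (Jewel-era workflow).
# node1..node3 and /dev/sdb are placeholders; adjust to your environment.
ceph-deploy new node1 node2 node3        # write the initial ceph.conf and keys
ceph-deploy install node1 node2 node3    # install Ceph packages on each host
ceph-deploy mon create-initial           # bring up the monitors, gather keyrings
ceph-deploy osd create node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb
ceph-deploy admin node1 node2 node3      # push the admin keyring so 'ceph -s' works
```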
> The real problem is not creating and configuring a Ceph cluster, but using
> it from oVirt, as that requires Cinder, i.e. a minimal setup of OpenStack. We
> have it and it's working pretty well, but it requires some work. For your
> reference, we have Cinder running on an oVirt VM using Gluster.
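As a pointer on the Cinder integration, the Ceph side usually needs a
dedicated pool and a restricted cephx user for Cinder. A minimal sketch (the
'volumes' pool and 'client.cinder' names are assumptions, following common
convention, not anything from this thread):

```shell
# Create a pool for Cinder volumes and a restricted cephx user for it.
# 'volumes' and 'client.cinder' are placeholder names.
ceph osd pool create volumes 128
ceph auth get-or-create client.cinder \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'
```

Cinder then points at that pool via the RBD driver options in cinder.conf
(volume_driver, rbd_pool, rbd_user), and oVirt consumes it through the
OpenStack Volume external provider.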
> Cheers,
>
>    Alessandro
>
> Il giorno 18 dic 2016, alle ore 17:07, Yaniv Kaul <ykaul at redhat.com> ha
> scritto:
>
>
>
> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpatel at gmail.com>
> wrote:
>
> Dear Team,
>
> We are using oVirt 4.0 for a POC, and I want to check what we are doing
> with all the oVirt gurus.
>
> We have 2 HP ProLiant DL380 servers with a 500 GB SAS disk, 4 * 1 TB SAS
> disks, and a 500 GB SSD.
>
> What we have done is install the oVirt hypervisor on this hardware, and we
> have a physical server where we run our oVirt manager. For the oVirt
> hypervisor we use only one 500 GB HDD; the rest we have kept for Ceph, so we
> have 3 nodes for Ceph running as guests on oVirt. My question to you all is
> whether what I am doing is right or wrong.
>
>
> I think Ceph requires a lot more resources than the above. It's also a bit
> more challenging to configure. I would highly recommend a 3-node cluster
> with Gluster.
> Y.
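If you go the Gluster route suggested above, a replica-3 volume across three
nodes looks roughly like this (a sketch only; host names and brick paths are
placeholders):

```shell
# Replica-3 GlusterFS volume for VM storage; host1..host3 and the
# brick paths are placeholder names. Run on host1.
gluster peer probe host2
gluster peer probe host3
gluster volume create vmstore replica 3 \
    host1:/bricks/vmstore/brick \
    host2:/bricks/vmstore/brick \
    host3:/bricks/vmstore/brick
gluster volume start vmstore
```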
>
>
>
> Regards
> Rajat
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
>
> Sent from my Cell Phone - excuse the typos & auto incorrect
>

