Alessandro,

Right now I don't have Cinder running in my setup. In case Ceph doesn't work, I can get one VM running an all-in-one OpenStack, attach all these disks to that OpenStack, and use Cinder to present the storage to my oVirt.

At the same time, I have not been able to find a case study for this.
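For what it's worth, the all-in-one OpenStack VM mentioned above can be stood up with RDO's packstack. This is only a sketch, assuming a CentOS 7 guest and the Newton release that was current at the time:

```shell
# On a CentOS 7 VM (assumption): enable the RDO repo, install packstack,
# and deploy an all-in-one OpenStack. Cinder is included by default and
# can then manage the disks attached to this VM.
yum install -y centos-release-openstack-newton
yum install -y openstack-packstack
packstack --allinone
```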

Regards
Rajat



Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...



On Sun, Dec 18, 2016 at 9:17 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
Hi,
oh, so you have only 2 physical servers? I understood there were 3! Well, in this case ceph would not work very well: too few resources and too little redundancy. You could try a replica 2, but it's not safe. A replica 3 could be forced, but you would end up with one server holding 2 of the replicas, which is dangerous/useless.
Okay, so you use NFS as the storage domain, but in your setup HA is not guaranteed: if a physical machine goes down and it's the one where the storage domain resides, you are lost. Why not use gluster instead of NFS for the oVirt disks? You could still reserve a small gluster space for the non-ceph machines (for example a Cinder VM) and use ceph for the rest. Where do you have your Cinder running?
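For reference, the replica counts discussed above map to Ceph's per-pool size/min_size settings. A sketch, with a hypothetical pool name, of what forcing replica 2 on a 2-node cluster looks like:

```shell
# Hypothetical pool name "ovirt-volumes". Replica 2 is the most a
# 2-node cluster can place sensibly; as noted above it is not safe,
# since losing one copy leaves no redundancy.
ceph osd pool create ovirt-volumes 128 128
ceph osd pool set ovirt-volumes size 2       # two copies of each object
ceph osd pool set ovirt-volumes min_size 1   # keep serving I/O with one copy left
```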
Cheers,

    Alessandro

Il giorno 18 dic 2016, alle ore 18:05, rajatjpatel <rajatjpatel@gmail.com> ha scritto:

Hi Alessandro,

Right now I have 2 physical servers hosting oVirt. These are HP ProLiant DL380s; each server has 1 x 500GB SAS disk, 4 x 1TB SAS disks, and 1 x 500GB SSD. Right now I use only one 500GB SAS disk on each server to run oVirt; the rest are not in use. At present I am using NFS, coming from a mapper, as oVirt storage. Going forward we would like to use all these disks in a hyper-converged setup for oVirt. I can see RH has a KB for doing this with Gluster, but we are looking at Ceph because of its performance and scale.

<Screenshot from 2016-12-18 21-03-21.png>
Regards
Rajat




On Sun, Dec 18, 2016 at 8:49 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
Hi Rajat,
sorry but I do not really have a clear picture of your actual setup, can you please explain a bit more?
In particular:

1) What do you mean by using 4TB for oVirt? On which machines, and how do you make it available to oVirt?

2) how do you plan to use ceph with ovirt?

I guess we can give more help if you clarify those points.
Thanks,

   Alessandro 

Il giorno 18 dic 2016, alle ore 17:33, rajatjpatel <rajatjpatel@gmail.com> ha scritto:

Great, thanks! Alessandro ++ Yaniv ++

What I want is to use around 4TB of SAS disk for my oVirt (which is going to be RHV 4.0.5 once the POC is 100% successful; in fact, all products will be RH).

I have done a lot of duckduckgo searching on all these solutions and used a lot of references from ovirt.org & access.redhat.com for setting up the oVirt engine and hypervisors.

We don't mind having more guests running to create Ceph block storage, which will then be presented to oVirt as storage. Gluster is not in use right now because we will have DBs running on the guests.

Regards
Rajat

On Sun, Dec 18, 2016 at 8:21 PM Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
Hi,
having a 3-node ceph cluster is the bare minimum you can have to make it work, unless you want just a replica-2 mode, which is not safe.
It's not true that ceph is not easy to configure: you can very easily use ceph-deploy, have puppet configure it, or even run it in containers. Using docker is in fact the easiest solution; it really takes 10 minutes to bring a cluster up. I've tried it both with jewel (official containers) and kraken (custom containers), and it works pretty well.
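As a rough illustration of the container approach described above, a single monitor can be started with the ceph/daemon image. The IP and network values here are placeholders you would replace with your own:

```shell
# Single-node sketch using the ceph/daemon image (jewel-era tag assumed).
# MON_IP and CEPH_PUBLIC_NETWORK are placeholders; set them to match
# your host's address and subnet.
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.10 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon

# Check cluster status once the monitor is up
docker exec $(docker ps -q -f ancestor=ceph/daemon) ceph -s
```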
The real problem is not creating and configuring a ceph cluster, but using it from oVirt, as that requires Cinder, i.e. a minimal setup of OpenStack. We have it and it's working pretty well, but it requires some work. For your reference, we have Cinder running on an oVirt VM using gluster.
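For reference, the Cinder side of this is mostly a cinder.conf backend section pointing at the Ceph RBD driver, plus a keyring and a libvirt secret on the hypervisors. A sketch, with assumed pool and user names:

```ini
# /etc/cinder/cinder.conf (fragment). Pool name "volumes" and user
# "cinder" are assumptions for this example.
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>
```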
Cheers,

   Alessandro 

Il giorno 18 dic 2016, alle ore 17:07, Yaniv Kaul <ykaul@redhat.com> ha scritto:



On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpatel@gmail.com> wrote:
Dear Team,

We are using oVirt 4.0 for a POC, and I want to check what we are doing with all the oVirt gurus.

We have 2 HP ProLiant DL380s, each with 1 x 500GB SAS disk, 4 x 1TB SAS disks, and a 500GB SSD.

What we have done: we installed the oVirt hypervisor on this hardware, and we have a separate physical server running our oVirt manager. For the oVirt hypervisor we are using only one 500GB HDD; the rest we have kept for ceph, so we have 3 nodes running as guests on oVirt for ceph. My question to you all is whether what I am doing is right or wrong.

I think Ceph requires a lot more resources than the above, and it's also a bit more challenging to configure. I would highly recommend a 3-node cluster with Gluster.
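The 3-node Gluster setup recommended here is, in outline, a peer probe plus a replica-3 volume. A sketch with hypothetical host and brick names:

```shell
# Hypothetical host/brick paths. Replica 3 keeps a full copy of the
# data on each of the three nodes, which is what oVirt's
# hyper-converged Gluster setups expect.
gluster peer probe node2.example.com
gluster peer probe node3.example.com
gluster volume create vmstore replica 3 \
  node1.example.com:/gluster/brick1/vmstore \
  node2.example.com:/gluster/brick1/vmstore \
  node3.example.com:/gluster/brick1/vmstore
gluster volume start vmstore
```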
Y.
 

Regards
Rajat


_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--

Sent from my Cell Phone - excuse the typos & auto incorrect