
On 12/10/2014 05:47 PM, Jason Greene wrote:
Thanks for the suggestions.
Following Maor’s suggestion I was able to add a local storage domain, but that required maintenance mode, so I had to fail the engine over to another host before making the change on the current host.
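For anyone trying to reproduce that, a minimal sketch of the sequence as I understand it; the helper script is hypothetical and the hosted-engine flags (--set-maintenance, --vm-status) should be verified against your installed version:

#!/usr/bin/env python
# Hypothetical helper sketching the flow above: put this host into local
# hosted-engine maintenance so the engine VM fails over to the peer, make
# the change, then rejoin the HA pool.
import subprocess

def run(cmd):
    print('+ ' + ' '.join(cmd))
    print(subprocess.check_output(cmd).decode())

# Stop HA scoring on this host; the engine VM should restart on the other host.
run(['hosted-engine', '--set-maintenance', '--mode=local'])

# Confirm the engine VM is reported up elsewhere before touching anything here.
run(['hosted-engine', '--vm-status'])

# ... make the local storage / maintenance-mode change on this host ...

# Re-enable HA on this host afterwards.
run(['hosted-engine', '--set-maintenance', '--mode=none'])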
I like the appliance solution a little better, although I think it’s best to run it under its own private KVM process unmanaged by oVirt, so that it’s possible to edit and cycle the host. Unfortunately it’s still a bit cumbersome, as you either need an engine appliance per system or have to shuffle the image around with some sort of disaster-recovery plan.
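To make the "private KVM process" idea concrete, here is a sketch of defining the appliance as a plain libvirt guest outside of vdsm; the disk path, memory, bridge name and so on are made up, so treat it as illustration only:

#!/usr/bin/env python
# Illustrative only: run the engine appliance as a plain libvirt guest,
# unmanaged by oVirt/vdsm, so it can be edited and power-cycled freely.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>engine-appliance</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/engine-appliance.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='ovirtmgmt'/>
      <model type='virtio'/>
    </interface>
    <graphics type='vnc'/>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')
dom = conn.defineXML(DOMAIN_XML)   # persistent definition, survives host reboots
dom.create()                       # start the guest now
print('%s active: %s' % (dom.name(), bool(dom.isActive())))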
I also looked into using gluster or cephfs as a way to share state, but noticed the BZs about the lack of complete atomicity leading to duplicate engines.
This is probably not the right place for dev musings, but IMO it would be great if a future release offered a solution that doesn’t require shared storage, which for smaller use cases is often too pricey a requirement. Ideally, under such a “horizontal” setup, each host would govern its own management data, and the engine would act more as an authoritative aggregator, reducing the need for HA (if it fails, just reinstall a clean one and let it reimport everything). It seems like most of the pieces are already there, since the per-host vdsm instance already contains much of the data. I’m guessing the missing element is having the engine support pulling that information as opposed to just pushing it. This is similar to a capability that an unnamed proprietary competitor has, so it might have some appeal. Of course such setups do have limitations; you still need shared storage for live migrations and so on, so I certainly understand the rationale behind the existing design. Anyway, it’s just some food for thought.
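To sketch what I mean by "pulling" (purely illustrative; the host names are made up and the exact vdsClient invocation should be checked against your vdsm version):

#!/usr/bin/env python
# Toy "aggregator" that pulls each host's view of the world straight from
# vdsm instead of relying on state pushed to a central engine.
import subprocess

HOSTS = ['host1.example.com', 'host2.example.com']   # hypothetical names

def vds(host, *verb):
    # -s asks vdsClient to talk to vdsm over SSL
    return subprocess.check_output(['vdsClient', '-s', host] + list(verb)).decode()

for host in HOSTS:
    print('== %s ==' % host)
    print(vds(host, 'getVdsCaps'))      # host capabilities / identity
    print(vds(host, 'list', 'table'))   # VMs this host is currently running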
Before we go so far out... Gluster should work with 3 hosts; we are working on improving the flow for this in 3.6. Today it requires quite a few manual steps.
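For reference, a sketch of just the Gluster-side portion of those manual steps (host names and brick paths are invented, and the hosted-engine deployment against the volume is not shown):

#!/usr/bin/env python
# Rough sketch: build a replica-3 Gluster volume across three hosts to back
# the hosted-engine storage. Names and brick paths are placeholders.
import subprocess

HOSTS = ['host1', 'host2', 'host3']
BRICK = '/gluster/engine/brick'

def sh(*cmd):
    print('+ ' + ' '.join(cmd))
    subprocess.check_call(cmd)

# Join the hosts into one trusted pool (run from host1).
for peer in HOSTS[1:]:
    sh('gluster', 'peer', 'probe', peer)

# One brick per host, three-way replication so a single host can fail.
bricks = ['%s:%s' % (h, BRICK) for h in HOSTS]
sh('gluster', 'volume', 'create', 'engine', 'replica', '3', *bricks)
sh('gluster', 'volume', 'start', 'engine')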
Thanks
-Jason
On Dec 7, 2014, at 6:26 AM, Doron Fediuck <dfediuck@redhat.com> wrote:
Hi Jason, Hosted Engine was designed to work with shared storage, since all hosts need to share information on their status and thereby support high availability for this VM.
If you do not need high availability, you can use the RHEV appliance to get a VM running with the engine inside. Remember that a failure of this host will kill the engine VM as well.
Doron
----- Original Message -----
From: "Maor Lipchuk" <mlipchuk@redhat.com> To: "Jason Greene" <jason.greene@redhat.com> Cc: users@ovirt.org Sent: Sunday, December 7, 2014 1:22:44 PM Subject: Re: [ovirt-users] Local storage with self-hosted mode
Hi Jason,
Did you try to create a new local Data Center and add a local storage domain there? Or does it have to be in the same Data Center that contains the hosted engine?
Regards, Maor
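Roughly, with the 3.x Python SDK (ovirtsdk), that could look like the sketch below; the engine URL, host name, path and the exact parameter spellings are assumptions from memory, so please check them against the SDK docs for your version:

#!/usr/bin/env python
# Illustrative only: a separate local Data Center, a one-host cluster and a
# localfs data domain, written against the oVirt 3.x Python SDK from memory.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',   # hypothetical engine URL
          username='admin@internal',
          password='secret',
          insecure=True)

# Data center whose storage is local to a single host.
api.datacenters.add(params.DataCenter(
    name='local_dc',
    storage_type='localfs',
    version=params.Version(major='3', minor='5')))

# A one-host cluster inside it (the CPU family here is an assumption).
api.clusters.add(params.Cluster(
    name='local_cluster',
    cpu=params.CPU(id='Intel SandyBridge Family'),
    data_center=api.datacenters.get('local_dc'),
    version=params.Version(major='3', minor='5')))

# The local storage domain, backed by a plain directory on that host.
api.storagedomains.add(params.StorageDomain(
    name='local_data',
    data_center=api.datacenters.get('local_dc'),
    host=params.Host(name='host1'),               # hypothetical host name
    type_='data',
    storage=params.Storage(type_='localfs', path='/data/images')))

api.disconnect()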
----- Original Message -----
From: "Jason Greene" <jason.greene@redhat.com> To: users@ovirt.org Sent: Friday, December 5, 2014 11:20:31 PM Subject: [ovirt-users] Local storage with self-hosted mode
Is there any way to use local storage with self-hosted mode for VMs other than the engine? The interface does not seem to allow it. I can hack in local storage on vdsm, but it’s not discovered/used by the engine (I assume this is because it keeps its own metadata). I tried using a POSIX domain, but there seems to be an expectation that the POSIX domain is accessible to all other hosts.
My use case is 2 physical servers with no shared storage options, and we need fast I/O since the VMs are used for CI, so local storage is the ideal setup.
-Jason
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users