[Users] so, what do you want next in oVirt?
Andrew Lau
andrew at andrewklau.com
Fri Sep 6 07:19:46 EDT 2013
A lot of work.. but has any consideration been given to redesigning the
web UI? Right now it's so complex and heavy, it takes a while to load, and
too many features are hidden! I've been using oVirt for half a year and I'm
still finding small useful things tucked away here and there.
By comparison, I like how OpenStack hides its thousands of complexities
behind a clean interface.
Just a thought..
On Fri, Sep 6, 2013 at 8:43 PM, <suporte at logicworks.pt> wrote:
> It would be great to have on the Engine:
> - An upload option for ISO files
> - A backup and restore option
> - High availability for the engine: install the engine on two platforms
> (hardware?), then integrate them for synchronization
>
> Jose
>
> ------------------------------
> *From: *"noc" <noc at nieuwland.nl>
> *Cc: *users at ovirt.org
> *Sent: *Friday, 6 September 2013 10:28:09
>
> *Subject: *Re: [Users] so, what do you want next in oVirt?
>
> On 6-9-2013 10:12, Itamar Heim wrote:
> > On 09/05/2013 10:30 AM, noc wrote:
> >>>> On 08/21/2013 12:11 PM, Itamar Heim wrote:
> >>>>> On 08/21/2013 02:40 AM, Joop van de Wege wrote:
> >>>>>>
> >>>>>> What I would like to see in the next version is PXE boot of the
> >>>>>> nodes.
> >>>>>> Probably not easy to achieve because of the dependency on DHCP.
> >>>>>
> >>>>> Hi Joop,
> >>>>>
> >>>>> can you please give a bit more information on the use case / how you
> >>>>> envision this?
> >>>>>
> >>>>> current thinking around bare metal provisioning of hosts is to extend
> >>>>> the functionality around the foreman provider for this, but you may
> >>>>> have other suggestions?
> >>>>
> >>>> I think Joop means being able to add hosts (nodes) to a cluster by
> >>>> adding their MAC addresses to the DHCP list so they PXE boot into
> >>>> ovirt-node and join the cluster. This would make it easy to add new
> >>>> physical nodes without any spinning disks or other local storage
> >>>> requirements.
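> >>>>
> >>>> For example, a tiny script could generate the dhcpd host entries for
> >>>> the nodes you want to PXE boot into ovirt-node (just a sketch; the
> >>>> MACs, addresses and TFTP server below are made up):
> >>>>
> >>>>   #!/usr/bin/env python
> >>>>   # Emit ISC dhcpd host stanzas that PXE-boot known nodes into ovirt-node.
> >>>>   NODES = {
> >>>>       "node01": ("52:54:00:aa:bb:01", "192.168.1.101"),
> >>>>       "node02": ("52:54:00:aa:bb:02", "192.168.1.102"),
> >>>>   }
> >>>>   TFTP_SERVER = "192.168.1.10"  # management host running tftpd
> >>>>
> >>>>   for name, (mac, ip) in sorted(NODES.items()):
> >>>>       print("host %s {" % name)
> >>>>       print("    hardware ethernet %s;" % mac)
> >>>>       print("    fixed-address %s;" % ip)
> >>>>       print("    next-server %s;" % TFTP_SERVER)
> >>>>       print('    filename "pxelinux.0";')
> >>>>       print("}")
> >>>>
> >>>> The pxelinux config would then just point at the ovirt-node kernel and
> >>>> initrd served over TFTP.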
> >>>
> >>> we started adding foreman integration in 3.3:
> >>> http://www.ovirt.org/Features/ForemanIntegration
> >>>
> >>> adding ohad and oved for their thoughts on this.
> >>>
> >>>>
> >>>> I suppose this may not be easy with complex network setups (bonds on
> >>>> the management network, management network on a tagged VLAN, etc.), but
> >>>> it should be possible if the management network interface is plain and
> >>>> physical.
> >>>>
> >>>> /Simon
> >>>>
> >>>> PS, Perhaps Joop can confirm this idea, we've talked about it IRL.
> >>>
> >> This isn't about provisioning with Foreman. It's about the compute
> >> nodes NOT having any spinning disks. So the only way to start a node is
> >> to PXE boot it and then let it (re)connect with the engine. The engine
> >> then identifies it as either a new node or a reconnecting node, and it
> >> gets its configuration from the engine. For reference: that's how
> >> VirtualIron works. It has a management network, just like oVirt, on
> >> which it runs a TFTP and DHCP server. Nodes are plugged into the
> >> management network, without disks, and then PXE booted, after which they
> >> appear in the web UI as new unconfigured nodes. You can then set various
> >> settings, and upon rebooting the nodes will receive these settings
> >> because they are recognised by their MAC addresses. The advantage of this
> >> setup is that you can place a new server into a rack, cable it, power it
> >> on and go back to your office, where you'll find the new node waiting to
> >> be configured. No messing around with CDs to install an OS, no spending
> >> hours on end in the datacenter; just in and out.
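> >>
> >> Conceptually the engine-side logic would be something like this (only an
> >> illustration in Python, not how VirtualIron or oVirt actually implement
> >> it):
> >>
> >>   # Hypothetical handler for a node that has just PXE booted and
> >>   # announced itself over the management network.
> >>   known_nodes = {}  # MAC address -> saved configuration
> >>
> >>   def handle_announcement(mac, address):
> >>       if mac in known_nodes and known_nodes[mac]["state"] == "configured":
> >>           # Reconnecting node: hand back the settings stored for it.
> >>           return {"status": "configured", "config": known_nodes[mac]}
> >>       # New node: record it as unconfigured so it shows up in the web UI
> >>       # waiting for the admin to fill in its settings.
> >>       known_nodes[mac] = {"address": address, "state": "pending"}
> >>       return {"status": "pending", "config": None}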
> >>
> >> Yes, disks are cheap, but they break down, need maintenance, mean
> >> downtime and in general more admin time than when you don't have them.
> >> (It's a shame to have a RAID1 of two 1 TB disks just to install an OS of
> >> less than 10 GB.)
> >
> > just wondering, how do they prevent a rogue node/guest from
> > masquerading as such a host and getting access/secrets/VMs launched
> > on an untrusted node (they could easily report a different MAC
> > address if layer 2 isn't hardened against that)?
> >
> They would need physical access to your rack, which of course is locked;
> you would need to power the node down/up, which would trigger an alert; a
> switch port going down/up would trigger an alert; so you'd probably be
> notified that something not quite right is happening. I haven't gone
> through the source to see if there is more than just the MAC address check.
>
> > other than that, yes. we actually used to have this via the
> > AutoApprovePatterns config option, which would have the engine approve
> > a pending node as it registers (I admit I don't think anyone has used
> > this in the last several years, and it may be totally broken by now).
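> >
> > from memory, the idea was roughly the following (a sketch of the logic,
> > not the actual engine code):
> >
> >   import fnmatch
> >
> >   # Patterns configured by the admin, e.g. via AutoApprovePatterns.
> >   auto_approve_patterns = ["node*.example.com", "52:54:00:*"]
> >
> >   def should_auto_approve(host_name, mac):
> >       # Approve a pending registration automatically if its name or MAC
> >       # matches one of the configured patterns.
> >       return any(fnmatch.fnmatch(host_name, p) or fnmatch.fnmatch(mac, p)
> >                  for p in auto_approve_patterns)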
> >
> > please note this doesn't solve the need for a disk, just the
> > auto-registration part (if it still works)
> What I would like is to have the oVirt Node PXE boot and get its config
> from the engine, or auto-register. I know there is a script which converts
> the ISO into a huge PXE-boot kernel, but I don't know how the config part
> would be solved, or whether it already is.
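>
> For the config part I'm picturing something like a tiny first-boot script
> on the node that reports its MAC to the engine and pulls its settings
> back. Completely made up, the engine has no such endpoint as far as I
> know:
>
>   import json, urllib2
>
>   def primary_mac(iface="em1"):
>       # Read the MAC of the management interface from sysfs.
>       with open("/sys/class/net/%s/address" % iface) as f:
>           return f.read().strip()
>
>   # Imaginary registration endpoint on the engine.
>   req = urllib2.Request("https://engine.example.com/node-register",
>                         json.dumps({"mac": primary_mac()}),
>                         {"Content-Type": "application/json"})
>   config = json.loads(urllib2.urlopen(req).read())
>   # ...apply 'config' (networks, cluster membership, etc.) and connect
>   # to the engine.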
>
> @karli:
> If you run your cluster with Memory Optimization=None then you won't need
> swap. I have been doing that for years and haven't had a single problem
> attributed to it. I would just like to have the choice: PXE boot the node
> and know that you don't have swap, or run with disks if you really need
> overprovisioning.
>
> Regards,
>
> Joop
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>