<html><head><style type='text/css'>p { margin: 0; }</style></head><body><div style='font-family: arial,helvetica,sans-serif; font-size: 10pt; color: #000000'>Hi Doron,<br><br>But first you have to install the engine before the VM. So the idea is to make a backup and restore it to a VM?<br><br><hr id="zwchr"><div style="color: rgb(0, 0, 0); font-weight: normal; font-style: normal; text-decoration: none; font-family: Helvetica,Arial,sans-serif; font-size: 12pt;"><b>From: </b>"Doron Fediuck" <dfediuck@redhat.com><br><b>To: </b>suporte@logicworks.pt<br><b>Cc: </b>users@ovirt.org<br><b>Sent: </b>Sunday, September 8, 2013 23:06:20<br><b>Subject: </b>Re: [Users] so, what do you want next in oVirt?<br><br>Hi Jose,<br>the latter is available via the hosted engine, which is a highly<br>available VM that will be migrated / restarted on a different<br>host if something goes wrong.<br><br>----- Original Message -----<br>> From: suporte@logicworks.pt<br>> To: users@ovirt.org<br>> Sent: Friday, September 6, 2013 1:43:04 PM<br>> Subject: Re: [Users] so, what do you want next in oVirt?<br>> <br>> It would be great to have on the Engine:<br>> - An upload option for ISO files<br>> - A backup and restore option<br>> - High availability for the engine: install the engine on 2 platforms<br>> (hardware?), then integrate them for synchronization<br>> <br>> Jose<br>> <br>> <br>> From: "noc" <noc@nieuwland.nl><br>> Cc: users@ovirt.org<br>> Sent: Friday, September 6, 2013 10:28:09<br>> Subject: Re: [Users] so, what do you want next in oVirt?<br>> <br>> On 6-9-2013 10:12, Itamar Heim wrote:<br>> > On 09/05/2013 10:30 AM, noc wrote:<br>> >>>> On 08/21/2013 12:11 PM, Itamar Heim wrote:<br>> >>>>> On 08/21/2013 02:40 AM, Joop van de Wege wrote:<br>> >>>>>> <br>> >>>>>> What I would like to see in the
next version is PXE boot of the<br>> >>>>>> nodes.<br>> >>>>>> Probably not easy to achieve because of the dependency on DHCP.<br>> >>>>> <br>> >>>>> Hi Joop,<br>> >>>>> <br>> >>>>> can you please give a bit more information on the use case / how you<br>> >>>>> envision this?<br>> >>>>> <br>> >>>>> current thinking around bare metal provisioning of hosts is to extend<br>> >>>>> the functionality around the foreman provider for this, but you may<br>> >>>>> have other suggestions?<br>> >>>> <br>> >>>> I think Joop means being able to add hosts (nodes) to a cluster by<br>> >>>> adding their MAC address to the DHCP list for PXE boot into ovirt-node,<br>> >>>> so they join the cluster. This would make it easy to add new physical<br>> >>>> nodes without any spinning disks or other local storage requirements.<br>> >>> <br>> >>> we started adding foreman integration in 3.3:<br>> >>> http://www.ovirt.org/Features/ForemanIntegration<br>> >>> <br>> >>> adding ohad and oved for their thoughts on this.<br>> >>> <br>> >>>> <br>> >>>> I suppose this may not be easy with complex network connections (bonds<br>> >>>> on mgmt network, mgmt network on a tagged vlan, etc), but it should be<br>> >>>> possible if the management network interface is plain and physical.<br>> >>>> <br>> >>>> /Simon<br>> >>>> <br>> >>>> PS, Perhaps Joop can confirm this idea, we've talked about it IRL.<br>> >>>> _______________________________________________<br>> >>>> Users mailing list<br>> >>>> Users@ovirt.org<br>> >>>> http://lists.ovirt.org/mailman/listinfo/users<br>> >>> <br>> >> This isn't about provisioning with Foreman. It's about having the compute<br>> >> nodes NOT have any spinning disks. So the only way to start a node is<br>> >> to PXE boot it and then let it (re)connect with the engine. Then it will<br>> >> be identified by the engine as either a new node or a reconnecting node, and<br>> >> it will get its configuration from the engine. For reference: that's how<br>> >> VirtualIron works.
It has a management network, just like oVirt, and on<br>> >> that it runs a TFTP and DHCP server. Nodes are plugged into the<br>> >> management network, without disks, and then PXE booted, after which they<br>> >> appear in the webui as new, unconfigured nodes. You can then set various<br>> >> settings, and upon rebooting the nodes will receive these settings<br>> >> because they are recognised by their MAC address. The advantage of this<br>> >> construct is that you can place a new server into a rack, cable it,<br>> >> power it on, and go back to your office, where you'll find the new node<br>> >> waiting to be configured. No messing around with CDs to install an OS,<br>> >> no standing in the datacenter for hours on end, just in and out.<br>> >> <br>> >> Yes, disks are cheap, but they break down, need maintenance, mean<br>> >> downtime, and in general cost more admin time than when you don't have<br>> >> them. (It's a shame to have a RAID1 of two 1 TB disks just to install an<br>> >> OS of less than 10 GB.)<br>> > <br>> > just wondering, how do they prevent a rogue node/guest from<br>> > masquerading as such a host, getting access/secrets/VMs to be launched<br>> > on such an untrusted node (they could easily report a different MAC<br>> > address if the layer 2 isn't hardened against that)?<br>> > <br>> They would need physical access to your rack, which of course is locked;<br>> you would need to power the node down/up, which would trigger an alert; a<br>> switch port going down/up would trigger an alert; so you would probably be<br>> notified that something not quite right is happening. I haven't gone through<br>> the source to see if there is more than just the MAC address check.<br>> <br>> > other than that, yes.
we actually used to have this via the<br>> > AutoApprovePatterns config option, which would have the engine approve<br>> > a pending node as it registers (I admit I don't think anyone has used this<br>> > in the last several years, and it may be totally broken by now).<br>> > <br>> > please note this doesn't solve the need for a disk, just the<br>> > auto-registration part (if it still works)<br>> What I would like is to have the oVirt Node PXE booting and getting its<br>> config from the engine, or autoregistering. I know there is a script which<br>> converts the ISO into a huge PXE boot kernel, but I don't know how to solve<br>> the config part, or whether it is already solved.<br>> <br>> @karli:<br>> If you run your cluster with Memory Optimization=None, then you won't need<br>> swap. We have been doing that for years and haven't had a single problem<br>> attributed to it. I would just like to have the choice: PXE boot the<br>> node and know that you don't have swap. Run with disks if you really<br>> need overprovisioning.<br>> <br>> Regards,<br>> <br>> Joop<br>> <br>> _______________________________________________<br>> Users mailing list<br>> Users@ovirt.org<br>> http://lists.ovirt.org/mailman/listinfo/users<br>> <br></div><br></div></body></html>