>> On 08/21/2013 12:11 PM, Itamar Heim wrote:
>>> On 08/21/2013 02:40 AM, Joop van de Wege wrote:
>>>>
>>>> What I would like to see in the next version is PXE boot of the
>>>> nodes.
>>>> Probably not easy to achieve because of the dependency on DHCP.
>>>
>>> Hi Joop,
>>>
>>> Can you please give a bit more information on the use case / how you
>>> envision this?
>>>
>>> Current thinking around bare-metal provisioning of hosts is to extend
>>> the functionality around the Foreman provider, but you may have
>>> other suggestions?
>>
>> I think Joop means being able to add hosts (nodes) to a cluster by
>> adding their MAC addresses to the DHCP list, so they PXE boot into
>> ovirt-node and join the cluster. This would make it easy to add new
>> physical nodes without any spinning disks or other local storage
>> requirements.
>
> We started adding Foreman integration in 3.3:
>
> http://www.ovirt.org/Features/ForemanIntegration
>
> Adding Ohad and Oved for their thoughts on this.
>
>>
>> I suppose this may not be easy with complex network connections (bonds
>> on the mgmt network, mgmt network on a tagged VLAN, etc.), but it
>> should be possible if the management network interface is plain and
>> physical.
>>
>> /Simon
>>
>> PS: Perhaps Joop can confirm this idea; we've talked about it IRL.
>
This isn't about provisioning with Foreman. It's about the compute
nodes NOT having any spinning disks. So the only way to start a node
is to PXE boot it and then let it (re)connect with the engine. The
engine then identifies it as either a new node or a reconnecting
node, and it gets its configuration from the engine. For reference:
that's how VirtualIron works. It has a management network, just like
oVirt, and on that network it runs a TFTP and a DHCP server. Nodes
are plugged into the management network, without disks, and then PXE
booted, after which they appear in the web UI as new, unconfigured
nodes. You can then set various settings, and upon reboot each node
will receive those settings because it is recognised by its MAC
address. The advantage of this setup is that you can place a new
server into a rack, cable it, power it on, and go back to your
office, where you'll find the new node waiting to be configured. No
messing around with CDs to install an OS, no being in the datacenter
for hours on end; just in and out.
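To make that concrete, here is a minimal sketch of what the DHCP side
could look like with ISC dhcpd; the subnet, addresses and boot file
name below are made up for illustration:

    # dhcpd.conf -- management network hands out PXE boot info
    subnet 192.168.100.0 netmask 255.255.255.0 {
        range 192.168.100.50 192.168.100.200;
        next-server 192.168.100.1;     # TFTP server on the mgmt network
        filename "pxelinux.0";         # bootloader that loads ovirt-node
    }

    # a node already known by its MAC can be pinned to a fixed address
    host node01 {
        hardware ethernet 52:54:00:aa:bb:cc;
        fixed-address 192.168.100.51;
    }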
Yes, disks are cheap, but they break down, need maintenance, mean
downtime, and in general cost more admin time than when you don't
have them. (It's a shame to have a RAID1 of two 1 TB disks just to
install an OS of less than 10 GB.)
Just wondering: how do they prevent a rogue node/guest from
masquerading as such a host and getting access/secrets/VMs launched
on an untrusted node? (It could easily report a different MAC address
if layer 2 isn't hardened against that.)
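(For what it's worth, one common way to harden layer 2 against MAC
spoofing is switch port security; a sketch for a Cisco-style access
port, assuming Cisco gear and that locking the port to the first
learned MAC is acceptable:

    interface GigabitEthernet0/1
     switchport mode access
     switchport port-security
     switchport port-security maximum 1
     switchport port-security mac-address sticky
     switchport port-security violation shutdown

This only pins the port to one MAC; it doesn't authenticate the host
itself, for which something like 802.1X would be the stronger option.)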
Other than that, yes. We actually used to have this via the
AutoApprovePatterns config option, which would have the engine
approve a pending node as it registers (I admit I don't think anyone
has used it in the last several years, and it may be totally broken
by now).
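If it still works, it should be togglable with the engine-config
tool, something like the following (the exact pattern syntax is from
memory and unverified):

    engine-config -g AutoApprovePatterns            # show the current value
    engine-config -s AutoApprovePatterns=<pattern>  # set the approve pattern
    service ovirt-engine restart                    # pick up the change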
Please note this doesn't solve the need for a disk, just the
auto-registration part (if it still works).