It would be great to have on the Engine:
- An upload option for the ISO files
- A backup and restore option
- High availability for the engine: install the engine on two platforms (hardware?), then
integrate them for synchronization
Jose
----- Original Message -----
From: "noc" <noc(a)nieuwland.nl>
Cc: users(a)ovirt.org
Sent: Friday, 6 September 2013 10:28:09
Subject: Re: [Users] so, what do you want next in oVirt?
On 6-9-2013 10:12, Itamar Heim wrote:
On 09/05/2013 10:30 AM, noc wrote:
>>> On 08/21/2013 12:11 PM, Itamar Heim wrote:
>>>> On 08/21/2013 02:40 AM, Joop van de Wege wrote:
>>>>>
>>>>> What I would like to see in the next version is PXE boot of the
>>>>> nodes.
>>>>> Probably not easy to achieve because of the dependency on DHCP.
>>>>
>>>> Hi Joop,
>>>>
>>>> can you please give a bit more information on the use case / how you
>>>> envision this?
>>>>
>>>> current thinking around bare metal provisioning of hosts is to extend
>>>> the functionality around the foreman provider for this, but you may
>>>> have other suggestions?
>>>
>>> I think Joop means being able to add hosts (nodes) to a cluster by
>>> adding their MAC address to the DHCP list, so they PXE-boot into
>>> ovirt-node and join the cluster. This would make it easy to add new
>>> physical nodes without any spinning disks or other local storage
>>> requirements.
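As a concrete illustration of the DHCP side of this idea, here is a minimal sketch assuming dnsmasq is used as the combined DHCP/TFTP server; the file paths, MAC address, hostname, and IP below are placeholders, not anything oVirt ships:

```
# /etc/dnsmasq.d/ovirt-pxe.conf -- hypothetical sketch
# Enable dnsmasq's built-in TFTP server and point it at the boot files
enable-tftp
tftp-root=/var/lib/tftpboot

# Hand the PXE bootloader to clients that network-boot
dhcp-boot=pxelinux.0

# Pin a known node to a fixed address by its MAC, so it can be
# recognised consistently across reboots (values are placeholders)
dhcp-host=52:54:00:12:34:56,node01,192.168.1.10
```

With an entry per MAC address, each diskless node would network-boot into ovirt-node and come up at a predictable address.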
>>
>> we started adding foreman integration in 3.3:
>>
http://www.ovirt.org/Features/ForemanIntegration
>>
>> adding ohad and oved for their thoughts on this.
>>
>>>
>>> I suppose this may not be easy with complex network connections (bonds
>>> on mgmt network, mgmt network on a tagged vlan, etc), but it should be
>>> possible if the management network interface is plain and physical.
>>>
>>> /Simon
>>>
>>> PS, Perhaps Joop can confirm this idea, we've talked about it IRL.
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>>
http://lists.ovirt.org/mailman/listinfo/users
>>
> This isn't about provisioning with Foreman. It's about having the compute
> nodes NOT having any spinning disks. So the only way to start a node is
> to PXE-boot it and then let it (re)connect with the engine. It will then
> be identified by the engine as either a new node or a reconnecting node,
> and it will get its configuration from the engine. For reference: that's
> how VirtualIron works. It has a management network, just like oVirt, and
> on that it runs a TFTP and DHCP server. Nodes are plugged into the
> management network, without disks, and then PXE-booted, after which they
> appear in the webui as new unconfigured nodes. You can then set various
> settings, and upon rebooting the nodes will receive these settings
> because each is recognised by its MAC address. The advantage of this
> construct is that you can place a new server into a rack, cable it,
> power it on, and go back to your office, where you'll find the new node
> waiting to be configured. No messing around with CDs to install an OS,
> no being in the datacenter for hours on end, just in and out.
>
> Yes, disks are cheap, but they break down, need maintenance, and mean
> downtime and in general more admin time than when you don't have them.
> (It's a shame to have a RAID1 of two 1TB disks just to install an OS of
> less than 10GB.)
Just wondering: how do they prevent a rogue node/guest from
masquerading as such a host and getting access/secrets/VMs launched
on an untrusted node (it could easily report a different MAC
address if layer 2 isn't hardened against that)?
They would need physical access to your rack, which of course is locked.
You would need to power down/up, which would trigger an alert; a switch
port going down/up would trigger an alert, so you'd probably be notified
that something not quite right is happening. I haven't gone through the
source to see if there is more than just the MAC address check.
Other than that, yes. We actually used to have this via the
AutoApprovePatterns config option, which would have the engine approve
a pending node as it registers (I admit I don't think anyone has used
this in the last several years, and it may be totally broken by now).
Please note this doesn't solve the need for a disk, just the
auto-registration part (if it still works).
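For reference, engine configuration options like this are set with the engine's engine-config tool; a hedged sketch of what that might look like, with the caveat above that the option may no longer work, and noting that the pattern value here is an assumption:

```
# Hypothetical sketch -- AutoApprovePatterns may be broken by now,
# as noted above; the pattern value is an assumption.
engine-config -s AutoApprovePatterns='node*'
# Restart the engine service so the setting takes effect
service ovirt-engine restart
```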
What I would like is to have the oVirt Node PXE-booting and getting its
config from the engine, or auto-registering. I know there is a script
which converts the ISO into a huge pxeboot kernel, but I don't know how
to solve the config part, or whether it is already solved.
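For anyone wanting to try the ISO-to-PXE conversion mentioned above, the script is livecd-iso-to-pxeboot from the livecd-tools package; the ISO filename below is a placeholder:

```
# Convert a live ISO into PXE boot artifacts (ISO name is a placeholder)
livecd-iso-to-pxeboot ovirt-node-iso.iso
# This produces a tftpboot/ directory containing the kernel, a large
# initrd that embeds the whole image, and a sample pxelinux.cfg entry
```

This only covers the boot image, not the configuration part discussed above.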
@karli:
If you run your cluster with Memory Optimization=None then you won't need
swap. I have been doing that for years and haven't had a single problem
attributed to it. I would just like to have the choice: PXE-boot the
node and know that you don't have swap; run with disks if you really
need overprovisioning.
Regards,
Joop
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users