Hi Doron,
But first you have to install the engine before the VM. So the idea is to make a backup
and restore it to a VM?
----- Original Message -----
From: "Doron Fediuck" <dfediuck(a)redhat.com>
To: suporte(a)logicworks.pt
Cc: users(a)ovirt.org
Sent: Sunday, September 8, 2013 23:06:20
Subject: Re: [Users] so, what do you want next in oVirt?
Hi Jose,
the latter is available via the hosted engine, which is a highly
available VM that will be migrated / restarted on a different
host if something goes wrong.
----- Original Message -----
From: suporte(a)logicworks.pt
To: users(a)ovirt.org
Sent: Friday, September 6, 2013 1:43:04 PM
Subject: Re: [Users] so, what do you want next in oVirt?
It would be great to have on the Engine:
- An upload option for ISO files
- A backup and restore option
- High availability for the engine: install the engine on two platforms
(hardware?), then integrate them for synchronization
Jose
From: "noc" <noc(a)nieuwland.nl>
Cc: users(a)ovirt.org
Sent: Friday, September 6, 2013 10:28:09
Subject: Re: [Users] so, what do you want next in oVirt?
On 6-9-2013 10:12, Itamar Heim wrote:
> On 09/05/2013 10:30 AM, noc wrote:
>>>> On 08/21/2013 12:11 PM, Itamar Heim wrote:
>>>>> On 08/21/2013 02:40 AM, Joop van de Wege wrote:
>>>>>>
>>>>>> What I would like to see in the next version is PXE boot of the
>>>>>> nodes.
>>>>>> Probably not easy to achieve because of the dependency on DHCP.
>>>>>
>>>>> Hi Joop,
>>>>>
>>>>> can you please give a bit more information on the use case / how you
>>>>> envision this?
>>>>>
>>>>> current thinking around bare metal provisioning of hosts is to extend
>>>>> the functionality around the Foreman provider for this, but you may
>>>>> have other suggestions?
>>>>
>>>> I think Joop means being able to add hosts (nodes) to a cluster by
>>>> adding their MAC address to the DHCP list so they PXE boot into ovirt-node
>>>> and thus join the cluster. This would make it easy to add new physical
>>>> nodes without any spinning disks or other local storage requirements.
>>>
>>> we started adding Foreman integration in 3.3:
>>>
>>> http://www.ovirt.org/Features/ForemanIntegration
>>>
>>> Adding Ohad and Oved for their thoughts on this.
>>>
>>>>
>>>> I suppose this may not be easy with complex network connections (bonds
>>>> on the mgmt network, mgmt network on a tagged VLAN, etc.), but it should be
>>>> possible if the management network interface is plain and physical.
>>>>
>>>> /Simon
>>>>
>>>> PS: Perhaps Joop can confirm this idea; we've talked about it IRL.
>>>
>> This isn't about provisioning with Foreman. It's about having the compute
>> nodes NOT have any spinning disks. So the only way to start a node is
>> to PXE boot it and then let it (re)connect with the engine. It will then
>> be identified by the engine as either a new node or a reconnecting node, and
>> it will get its configuration from the engine. For reference: that's how
>> VirtualIron works. It has a management network, just like oVirt, on
>> which it runs a TFTP and DHCP server. Nodes are plugged into the
>> management network, without disks, and then PXE booted, after which they
>> appear in the web UI as new, unconfigured nodes. You can then set various
>> settings, and upon rebooting the nodes will receive these settings
>> because each one is recognised by its MAC address. The advantage of this
>> setup is that you can place a new server into a rack, cable it,
>> power it on and go back to your office, where you'll find the new node
>> waiting to be configured. No messing around with CDs to install an OS,
>> no spending hours on end in the datacenter; just in and out.
>>
>> Yes, disks are cheap, but they break down, need maintenance, mean
>> downtime and, in general, more admin time than when you don't have them.
>> (It's a shame to have a RAID1 of two 1 TB disks just to install an OS of
>> less than 10 GB.)
>
> Just wondering, how do they prevent a rogue node/guest from
> masquerading as such a host and getting access/secrets/VMs launched
> on such an untrusted node (they could easily report a different MAC
> address if layer 2 isn't hardened against that)?
>
They would need physical access to your rack, which of course is locked.
They would need to power the node down and up, which would trigger an alert;
a switch port going down and up would trigger an alert; so you would probably
be notified that something not quite right is happening. I haven't gone through
the source to see if there is more than just the MAC address check.
> Other than that, yes. We actually used to have this via the
> AutoApprovePatterns config option, which would have the engine approve
> a pending node as it registers (I admit I don't think anyone has used this
> in the last several years, and it may be totally broken by now).
>
> Please note this doesn't solve the need for a disk, just the
> auto-registration part (if it still works).
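If that option does still exist, I would guess it can be set with engine-config,
something along these lines (the pattern is just a made-up example, and I haven't
verified whether the option is still recognised or what its exact value format is):

    # untested sketch: approve registering hosts whose name matches the pattern
    engine-config -s AutoApprovePatterns='node*.example.org'
    service ovirt-engine restart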
What I would like is to have the oVirt Node PXE booting and getting its
config from the engine, or auto-registering. I know there is a script which
converts the ISO into a huge PXE-bootable kernel, but I don't know how to solve
the config part, or whether that's already solved.
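For what it's worth, the "add the MAC address to the DHCP list" idea mentioned
earlier would look roughly like this with ISC dhcpd and pxelinux. All names,
addresses and file paths below are made up, and the kernel arguments for a
converted oVirt Node image will differ depending on what the conversion script
actually produces, so treat this as a rough sketch only:

    # dhcpd.conf: fixed reservation so the new diskless node always PXE boots
    host new-node-01 {
      hardware ethernet 52:54:00:aa:bb:cc;   # MAC of the diskless node
      fixed-address 192.168.1.51;
      next-server 192.168.1.10;              # TFTP server on the management network
      filename "pxelinux.0";
    }

    # pxelinux.cfg/default (or a per-MAC file): boot the converted node image
    default ovirt-node
    label ovirt-node
      kernel vmlinuz0
      # real kernel arguments come from the conversion script's output
      append initrd=initrd0.img rootflags=loop root=live:/ovirt-node.iso rootfstype=auto ro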
@karli:
If you run your cluster with Memory Optimization=None then you won't need
swap. I have been doing that for years and haven't had a single problem
attributed to it. I would just like to have the choice: PXE boot the
node and know that you don't have swap, or run with disks if you really
need overprovisioning.
Regards,
Joop
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users