Thanks for clarifying, makes sense now.
Is the public key trust needed for registration the same key that would
be used when adding a host via the UI?
Are there any examples of how to use the HostDeployProtocol [1]? I like
the idea of using registration but haven't the slightest idea how to
implement what's described in the docs [1]. I do recall seeing an
article posted (I've searched my email and can't find it) that had a
nice walk-through of using the oVirt API with browser tools. I'm unsure
whether this HostDeployProtocol would be done that way or via some
other method.
Thanks,
- Trey
[1] http://www.ovirt.org/Features/HostDeployProtocol
----- Original Message -----
> From: "Trey Dockendorf" <treydock(a)gmail.com>
> To: "Alon Bar-Lev" <alonbl(a)redhat.com>
> Cc: "ybronhei" <ybronhei(a)redhat.com>, "users" <users(a)ovirt.org>,
> "Fabian Deutsch" <fabiand(a)redhat.com>, "Dan Kenigsberg" <danken(a)redhat.com>,
> "Itamar Heim" <iheim(a)redhat.com>, "Douglas Landgraf" <dougsland(a)redhat.com>,
> "Oved Ourfali" <ovedo(a)redhat.com>
> Sent: Tuesday, August 5, 2014 10:45:12 PM
> Subject: Re: [ovirt-users] Proper way to change and persist vdsm
> configuration options
>
> Excellent, so installing 'ovirt-host-deploy' on each node then
> configuring the /etc/ovirt-host-deploy.conf.d files seems very
> automate-able, will see how it works in practice.
You do not need to install ovirt-host-deploy; just create the files.
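Since only the files matter, the drop-in can be written by any tool, Puppet or otherwise. A minimal Python sketch (the `40-iser.conf` filename and the helper function are illustrative; the `VDSM_CONFIG` key syntax is the one quoted later in this thread, and the value matches the iSER override discussed below):

```python
import os

# Directory that ovirt-host-deploy reads when the engine deploys the host.
CONF_DIR = "/etc/ovirt-host-deploy.conf.d"


def write_override(confdir=CONF_DIR):
    """Drop in an override so host-deploy writes the iSER setting
    ([irs] iscsi_default_ifaces = iser,default) into vdsm.conf."""
    if not os.path.isdir(confdir):
        os.makedirs(confdir)
    path = os.path.join(confdir, "40-iser.conf")
    with open(path, "w") as f:
        f.write("VDSM_CONFIG/irs/iscsi_default_ifaces=str:iser,default\n")
    return path
```

The file only has to exist before host-deploy runs; nothing needs to be installed on the host first.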
> Regarding the actual host registration and getting the host added to
> ovirt-engine, are there other methods besides the API and the sdk?
> Would it be possible to configure the necessary
> ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? I
> notice that running 'ovirt-host-deploy' wants to make whatever host
> executes it an oVirt hypervisor, but I haven't yet run it all the way
> through as I have no server to test with at this time. There seems to
> be no "--help" or similar command line argument.
You should not run host-deploy directly, but via the engine's process,
either registration or add host, as I replied previously.
When the base system is ready, you issue add host via the engine's API or
via the UI; the other alternative is to register the host using the
host-deploy protocol, and then approve the host via the engine's API or
via the UI.
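For the second alternative, the approve step can presumably be driven through the same REST API as add host. A hedged sketch that only builds the request, not sends it (the `/approve` action path, the `<cluster>` element in the body, and all engine/host/credential values are assumptions to check against the REST API docs):

```python
import base64
import urllib.request


def build_approve_request(engine, host_id, cluster_id, user, password):
    """Build (but do not send) the POST that would approve a pending host."""
    url = "%s/ovirt-engine/api/hosts/%s/approve" % (engine, host_id)
    body = ('<action><cluster id="%s"/></action>' % cluster_id).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/xml")
    creds = "%s:%s" % (user, password)
    req.add_header(
        "Authorization",
        "Basic " + base64.b64encode(creds.encode("utf-8")).decode("ascii"),
    )
    # Send with urllib.request.urlopen(req) against a live engine.
    return req
```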
> I'm sure this will all be more clear once I attempt the steps and run
> through the motions. Will try to find a system to test on so I'm
> ready once our new servers arrive.
>
> Thanks,
> - Trey
>
> On Tue, Aug 5, 2014 at 2:23 PM, Alon Bar-Lev <alonbl(a)redhat.com> wrote:
> >
> >
> > ----- Original Message -----
> >> From: "Trey Dockendorf" <treydock(a)gmail.com>
> >> To: "Alon Bar-Lev" <alonbl(a)redhat.com>
> >> Cc: "ybronhei" <ybronhei(a)redhat.com>, "users" <users(a)ovirt.org>,
> >> "Fabian Deutsch" <fabiand(a)redhat.com>, "Dan Kenigsberg" <danken(a)redhat.com>,
> >> "Itamar Heim" <iheim(a)redhat.com>, "Douglas Landgraf" <dougsland(a)redhat.com>
> >> Sent: Tuesday, August 5, 2014 10:01:14 PM
> >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm
> >> configuration options
> >>
> >> Ah, thank you for the input! Just so I'm not spending time
> >> implementing the wrong changes, let me confirm I understand your
> >> comments.
> >>
> >> 1) Deploy host with Foreman
> >> 2) Apply Puppet catalog including ovirt Puppet module
> >> 3) Initiate host-deploy via rest API
> >>
> >> In the ovirt module the following takes place:
> >>
> >> 2a) Add yum repos
> >> 2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf
> >>
> >
> > you can have any # of files with any prefix :))
> >
> >> For #2b I have a few questions
> >>
> >> * The name of the ".conf" file is simply for sorting and
> >> labeling/organization; it has no functional impact on what those
> >> overrides apply to?
> >
> > right.
> >
> >> * That file is managed on the ovirt-engine server, not the actual nodes?
> >
> > Currently on the host; in the future we will provide a method to add this
> > to the engine database [1].
> >
> > [1] http://gerrit.ovirt.org/#/c/27064/
> >
> >> * Is there any way to apply overrides to specific hosts? For example
> >> if I have some hosts that require a config and others that don't, how
> >> would I separate those *.conf files? This is more theoretical as
> >> right now my setup is common across all nodes.
> >
> > The puppet module can put whatever is required on each host.
> >
> >> For #3...the implementation of API calls from within Puppet is a
> >> challenge and one I can't tackle yet, but definitely will make it a
> >> goal for the future. In the meantime, what's the "manual" way to
> >> initiate host-deploy? Is there a CLI command that would have the same
> >> result as an API call or is the recommended way to perform the API
> >> call manually (ie curl)?
> >
> > Well, you can register a host using the following protocol [1], but it is
> > difficult to do this securely; what you actually need is to establish ssh
> > trust for root with the engine key, then register.
> >
> > You can also use the register command via curl with something like (I have
> > not checked):
> >
> > https://admin%40internal:password@engine/ovirt-engine/api/hosts
> > ---
> > <?xml version="1.0" encoding="UTF-8"
standalone="yes"?>
> > <host>
> >   <name>host1</name>
> >   <address>dns</address>
> >   <ssh>
> >     <authentication_method>publickey</authentication_method>
> >   </ssh>
> >   <cluster id="cluster-uuid"/>
> > </host>
> > ---
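Spelled out as a concrete curl invocation (still unverified, per the note above; the engine hostname, credentials, and cluster UUID are placeholders), the same request might look like:

```shell
# Write the payload from the example above to a file.
cat > host1.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<host>
  <name>host1</name>
  <address>dns</address>
  <ssh>
    <authentication_method>publickey</authentication_method>
  </ssh>
  <cluster id="cluster-uuid"/>
</host>
EOF

# Wrapped in a function so nothing is sent until add_host is invoked
# against a real engine.
add_host() {
    curl --insecure \
         --user 'admin@internal:password' \
         --request POST \
         --header 'Content-Type: application/xml' \
         --data @host1.xml \
         'https://engine/ovirt-engine/api/hosts'
}
```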
> >
> > you can also use the ovirt-engine-sdk-python package:
> > ---
> > import ovirtsdk.api
> > import ovirtsdk.xml
> >
> > sdk = ovirtsdk.api.API(
> >     url='https://host/ovirt-engine/api',
> >     username='admin@internal',
> >     password='password',
> >     insecure=True,
> > )
> > sdk.hosts.add(
> >     ovirtsdk.xml.params.Host(
> >         name='host1',
> >         address='host1',
> >         cluster=sdk.clusters.get('cluster'),
> >         ssh=ovirtsdk.xml.params.SSH(
> >             authentication_method='publickey',
> >         ),
> >     )
> > )
> > ---
> >
> > [1] http://www.ovirt.org/Features/HostDeployProtocol
> >
> >>
> >> Thanks!
> >> - Trey
> >>
> >> On Tue, Aug 5, 2014 at 1:45 PM, Alon Bar-Lev <alonbl(a)redhat.com> wrote:
> >> >
> >> >
> >> > ----- Original Message -----
> >> >> From: "Trey Dockendorf" <treydock(a)gmail.com>
> >> >> To: "ybronhei" <ybronhei(a)redhat.com>
> >> >> Cc: "users" <users(a)ovirt.org>, "Fabian Deutsch" <fabiand(a)redhat.com>,
> >> >> "Dan Kenigsberg" <danken(a)redhat.com>, "Itamar Heim" <iheim(a)redhat.com>,
> >> >> "Douglas Landgraf" <dougsland(a)redhat.com>, "Alon Bar-Lev" <alonbl(a)redhat.com>
> >> >> Sent: Tuesday, August 5, 2014 9:36:24 PM
> >> >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm
> >> >> configuration options
> >> >>
> >> >> On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei(a)redhat.com> wrote:
> >> >> > Hey,
> >> >> >
> >> >> > Just noticed something that I forgot about..
> >> >> > Before filing a new BZ, see in the ovirt-host-deploy
> >> >> > README.environment [1] the section:
> >> >> > VDSM/configOverride(bool) [True]
> >> >> > Override vdsm configuration file.
> >> >> >
> >> >> > Changing it to false will keep your vdsm.conf file as is after
> >> >> > deploying the host again (which happens after a node upgrade).
> >> >> >
> >> >> > [1]
> >> >> > https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment
> >> >> >
> >> >> > Please check if that is what you meant.
> >> >> >
> >> >> > Thanks,
> >> >> > Yaniv Bronhaim.
> >> >> >
> >> >>
> >> >> I was unaware of that package. I will check that out, as that
> >> >> seems to be what I am looking for.
> >> >>
> >> >> I have not filed this in BZ and will hold off pending
> >> >> ovirt-host-deploy. If you feel a BZ is still necessary then please
> >> >> do file one, and I would be happy to provide input if it would help.
> >> >>
> >> >> Right now this is my workflow.
> >> >>
> >> >> 1. Foreman provisions a bare-metal server with CentOS 6.5
> >> >> 2. Once provisioned and the system rebooted, Puppet applies the
> >> >> puppet-ovirt [1] module that adds the necessary yum repos
> >> >
> >> > and should stop here..
> >> >
> >> >> , and installs packages.
> >> >> Part of my Puppet deployment is basic things like sudo management
> >> >> (vdsm's sudo is accounted for), sssd configuration, and other
> >> >> aspects that are needed by every system in my infrastructure. Part
> >> >> of the ovirt::node Puppet class is managing vdsm.conf, and in my
> >> >> case that means ensuring iSER is enabled for iSCSI over IB.
> >> >
> >> > you can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf
> >> > ---
> >> > VDSM_CONFIG/section/key=str:content
> >> > ---
> >> >
> >> > this will create a proper vdsm.conf when host-deploy is initiated.
> >> >
> >> > You should then use the REST API to initiate host-deploy.
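For the iSER override discussed earlier in this thread, the drop-in and its effect would presumably look like (the filename is arbitrary, per the note above about prefixes):

```ini
# /etc/ovirt-host-deploy.conf.d/40-iser.conf
VDSM_CONFIG/irs/iscsi_default_ifaces=str:iser,default
```

which host-deploy should then render into /etc/vdsm/vdsm.conf as:

```ini
[irs]
iscsi_default_ifaces = iser,default
```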
> >> >
> >> >> 3. Once the host is online and has had the full Puppet catalog
> >> >> applied, I log into the ovirt-engine web interface and add the
> >> >> host (pulling its data via the Foreman provider).
> >> >
> >> > right, but you should let this process install packages and manage
> >> > configuration.
> >> >
> >> >> What I've noticed is that after step #3, after a host is added by
> >> >> ovirt-engine, the vdsm.conf file is reset to default and I have to
> >> >> reapply Puppet before it can be used, as one of my Data Storage
> >> >> Domains requires iSER (not available over TCP).
> >> >
> >> > right, see above.
> >> >
> >> >> What would be the workflow using ovirt-host-deploy? Thus far I've
> >> >> had to piece together my workflow based on the documentation and
> >> >> filled in blanks where possible, since I do require customizations
> >> >> to vdsm.conf and the documented workflow of adding a host via the
> >> >> web UI does not allow for such customization.
> >> >>
> >> >> Thanks,
> >> >> - Trey
> >> >>
> >> >> [1] - https://github.com/treydock/puppet-ovirt (README not fully
> >> >> updated, as I am still working out how to use Puppet with oVirt)
> >> >>
> >> >> >
> >> >> > On 08/05/2014 08:12 AM, Trey Dockendorf wrote:
> >> >> >>
> >> >> >> I'll file a BZ. As far as I can recall this has been an issue
> >> >> >> since 3.3.x, as I have been using Puppet to modify values and
> >> >> >> have had to rerun Puppet after installing a node via the GUI
> >> >> >> and when performing an update from the GUI. Given that it has
> >> >> >> occurred when the VDSM version didn't change on the node, it
> >> >> >> seems likely to be something done by the Python code that
> >> >> >> bootstraps a node and performs the other tasks. I won't have
> >> >> >> any systems available to test with for a few days. New hardware
> >> >> >> specifically for our oVirt deployment is on order, so I should
> >> >> >> be able to more thoroughly debug and capture logs at that time.
> >> >> >>
> >> >> >> Would using vdsm-reg be a better solution for adding new
> >> >> >> nodes? I only tried using vdsm-reg once and it went very
> >> >> >> poorly... lots of missing dependencies not pulled in by yum
> >> >> >> install that I had to install manually via yum. Then the node
> >> >> >> was auto-added to the newest cluster with no ability to change
> >> >> >> the cluster. I'd be happy to debug that too if there are some
> >> >> >> docs that outline the expected behavior.
> >> >> >>
> >> >> >> Using vdsm-reg or something similar seems like a better fit
> >> >> >> for puppet-deployed nodes, as opposed to requiring GUI steps to
> >> >> >> add the node.
> >> >> >>
> >> >> >> Thanks
> >> >> >> - Trey
> >> >> >> On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei(a)redhat.com> wrote:
> >> >> >>
> >> >> >>> On 07/31/2014 01:28 AM, Trey Dockendorf wrote:
> >> >> >>>
> >> >> >>>> I'm running oVirt nodes that are stock CentOS 6.5 systems
> >> >> >>>> with VDSM installed. I am using iSER to do iSCSI over RDMA,
> >> >> >>>> and to make that work I have to modify /etc/vdsm/vdsm.conf
> >> >> >>>> to include the following:
> >> >> >>>>
> >> >> >>>> [irs]
> >> >> >>>> iscsi_default_ifaces = iser,default
> >> >> >>>>
> >> >> >>>> I've noticed that any time I upgrade a node from the engine
> >> >> >>>> web interface, changes to vdsm.conf are wiped out. I don't
> >> >> >>>> know if this is being done by the configuration code or by
> >> >> >>>> the vdsm package. Is there a more reliable way to ensure
> >> >> >>>> changes to vdsm.conf are NOT removed automatically?
> >> >> >>>>
> >> >> >>>
> >> >> >>> Hey,
> >> >> >>>
> >> >> >>> vdsm.conf shouldn't be wiped out and shouldn't be changed at
> >> >> >>> all during upgrade. Other related conf files (such as
> >> >> >>> libvirtd.conf) might be overridden to keep default
> >> >> >>> configurations for vdsm, but vdsm.conf should persist with
> >> >> >>> the user's modifications. From my check, a regular yum
> >> >> >>> upgrade doesn't touch vdsm.conf.
> >> >> >>>
> >> >> >>> Douglas, can you verify that with node upgrade? It might be
> >> >> >>> specific to that flow.
> >> >> >>>
> >> >> >>> Trey, can you file a bugzilla on that and describe your
> >> >> >>> steps there?
> >> >> >>>
> >> >> >>> Thanks
> >> >> >>>
> >> >> >>> Yaniv Bronhaim,
> >> >> >>>
> >> >> >>>>
> >> >> >>>> Thanks,
> >> >> >>>> - Trey
> >> >> >>>> _______________________________________________
> >> >> >>>> Users mailing list
> >> >> >>>> Users(a)ovirt.org
> >> >> >>>> http://lists.ovirt.org/mailman/listinfo/users
> >> >> >>>>
> >> >> >>>>
> >> >> >>>
> >> >> >>> --
> >> >> >>> Yaniv Bronhaim.
> >> >> >>>
> >> >> >>
> >> >> >
> >> >>
> >>
>