Proper way to change and persist vdsm configuration options

I'm running oVirt nodes that are stock CentOS 6.5 systems with VDSM installed. I am using iSER to do iSCSI over RDMA, and to make that work I have to modify /etc/vdsm/vdsm.conf to include the following:

[irs]
iscsi_default_ifaces = iser,default

I've noticed that any time I upgrade a node from the engine web interface, the changes to vdsm.conf are wiped out. I don't know if this is being done by the configuration code or by the vdsm package. Is there a more reliable way to ensure changes to vdsm.conf are NOT removed automatically?

Thanks,
- Trey

On 07/31/2014 01:28 AM, Trey Dockendorf wrote:
I'm running ovirt nodes that are stock CentOS 6.5 systems with VDSM installed. I am using iSER to do iSCSI over RDMA and to make that work I have to modify /etc/vdsm/vdsm.conf to include the following:
[irs] iscsi_default_ifaces = iser,default
I've noticed that any time I upgrade a node from the engine web interface that changes to vdsm.conf are wiped out. I don't know if this is being done by the configuration code or by the vdsm package. Is there a more reliable way to ensure changes to vdsm.conf are NOT removed automatically?
Hey,

vdsm.conf shouldn't be wiped out or changed at all during an upgrade. Other related conf files (such as libvirtd.conf) might be overridden to keep the default configurations vdsm needs, but vdsm.conf should persist with the user's modifications. From my check, a regular yum upgrade doesn't touch vdsm.conf.

Douglas, can you verify that with a node upgrade? It might be specific to that flow.

Trey, can you file a bugzilla on that and describe your steps there?

Thanks,
Yaniv Bronhaim
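By the way, one quick way to rule out the rpm itself is to ask it whether it owns vdsm.conf as a config file and whether the on-disk copy still differs from the packaged default:

rpm -qc vdsm | grep vdsm.conf
rpm -V vdsm

If your edited vdsm.conf shows up in the rpm -V output before the upgrade but not after it, something rewrote the file back to the packaged default.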
-- Yaniv Bronhaim.

I'll file a BZ. As far as I can recall this has been an issue since 3.3.x, as I have been using Puppet to modify values and have had to rerun Puppet after installing a node via the GUI and when performing an update from the GUI. Given that it has occurred when the VDSM version didn't change on the node, it seems likely to be something done by the Python code that bootstraps a node and performs the other tasks. I won't have any systems available to test with for a few days. New hardware specifically for our oVirt deployment is on order, so I should be able to more thoroughly debug and capture logs at that time.

Would using vdsm-reg be a better solution for adding new nodes? I only tried using vdsm-reg once and it went very poorly: lots of missing dependencies not pulled in by the yum install that I had to install manually via yum. Then the node was auto-added to the newest cluster with no ability to change the cluster. I'd be happy to debug that too if there are docs that outline the expected behavior.

Using vdsm-reg or something similar seems like a better fit for Puppet-deployed nodes, as opposed to requiring GUI steps to add the node.

Thanks,
- Trey

On 08/05/2014 08:12 AM, Trey Dockendorf wrote:
[...]

Please send a link to the BZ, or assign me there. Please consult Fabian/Douglas on using vdsm-reg, though I don't think it will make any difference: after vdsm-reg, the engine still runs the host-deploy process, and that might override your vdsm.conf (which I need to test).
-- Yaniv Bronhaim.

Hey,

Just noticed something that I forgot about. Before filing a new BZ, see this section in the ovirt-host-deploy README.environment [1]:

VDSM/configOverride(bool) [True]
    Override vdsm configuration file.

Changing it to false will keep your vdsm.conf file as is after deploying the host again (which is what happens after a node upgrade).

[1] https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment

Please check if that is what you meant.

Thanks,
Yaniv Bronhaim.

Hi,

Do you actually use Puppet on top of ovirt-node? This is unsupported.

Regards,
Alon
From: "ybronhei" <ybronhei@redhat.com> To: "Trey Dockendorf" <treydock@gmail.com> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Alon Bar-Lev" <alonbl@redhat.com> Sent: Tuesday, August 5, 2014 8:32:04 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Hey,
Just noticed something that I forgot about.. before filing new BZ, see in ovirt-host-deploy README.environment [1] the section: VDSM/configOverride(bool) [True] Override vdsm configuration file.
changing it to false will keep your vdsm.conf file as is after deploying the host again (what happens after node upgrade)
[1] https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment
please check if that what you meant..
Thanks, Yaniv Bronhaim.
On 08/05/2014 08:12 AM, Trey Dockendorf wrote:
I'll file BZ. As far as I can recall this has been an issue since 3.3.x as I have been using Puppet to modify values and have had to rerun Puppet after installing a node via GUI and when performing update from GUI. Given that it has occurred when VDSM version didn't change on the node it seems likely to be something being done by Python code that bootstraps a node and performs the other tasks. I won't have any systems available to test with for a few days. New hardware specifically for our oVirt deployment is on order so should be able to more thoroughly debug and capture logs at that time.
Would using vdsm-reg be a better solution for adding new nodes? I only tried using vdsm-reg once and it went very poorly...lots of missing dependencies not pulled in from yum install I had to install manually via yum. Then the node was auto added to newest cluster with no ability to change the cluster. Be happy to debug that too if there's some docs that outline the expected behavior.
Using vdsm-reg or something similar seems like a better fit for puppet deployed nodes, as opposed to requiring GUI steps to add the node.
Thanks - Trey On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei@redhat.com> wrote:
On 07/31/2014 01:28 AM, Trey Dockendorf wrote:
I'm running ovirt nodes that are stock CentOS 6.5 systems with VDSM installed. I am using iSER to do iSCSI over RDMA and to make that work I have to modify /etc/vdsm/vdsm.conf to include the following:
[irs] iscsi_default_ifaces = iser,default
I've noticed that any time I upgrade a node from the engine web interface that changes to vdsm.conf are wiped out. I don't know if this is being done by the configuration code or by the vdsm package. Is there a more reliable way to ensure changes to vdsm.conf are NOT removed automatically?
Hey,
vdsm.conf shouldn't wiped out and shouldn't changed at all during upgrade. other related conf files (such as libvirtd.conf) might be overrided to keep defaults configurations for vdsm. but vdsm.conf should persist with user's modification. from my check, regular yum upgrade doesn't touch vdsm.conf
Douglas can you verify that with node upgrade? might be specific to that flow..
Trey, can file a bugzilla on that and describe your steps there?
Thanks
Yaniv Bronhaim,
Thanks, - Trey _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Yaniv Bronhaim.

On Tue, Aug 5, 2014 at 12:37 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
Hi,
Do you actually use puppet over ovirt-node? This is unsupported.
Regards, Alon
I use Puppet to configure everything on the system, and some of those things conflict with changes made by ovirt-engine when adding a node, so I've moved to managing those changes in an oVirt Puppet module.

When you refer to "ovirt-node", are you referring to the pre-built ISO images? AFAIK ovirt-node is not an option for me, as I use InfiniBand for all storage connections. Currently NFS is done using IPoIB (NFS over RDMA was crashing, so I am not pursuing it at the moment) and iSCSI is done using iSER. At some point I'd like to use IB interfaces in my guests via SR-IOV, which my Mellanox cards support, but I'm postponing that project until after our HPC cluster is upgraded :)

Right now the Puppet module does the following:

* Configures firewall rules just like what is done by oVirt (with the exception of a few not supported by puppetlabs-firewall, so I override the global directive that purges unknown firewall rules)
** This is necessary as I have to add other firewall rules, such as for Zabbix
* Exports an /etc/hosts entry for the node that is collected by the ovirt-engine host, so that if DNS goes down ovirt-engine does not lose access to the ovirtmgmt interfaces
* Installs yum repos for oVirt
* Installs vdsm
* Ensures vdsm.conf exists
* Populates /etc/vdsm/vdsm.id (IIRC a bug in a previous oVirt release required this)
* Ensures vdsmd is running and will start at boot
* Ensures vdsm sudo rules are present
* Manages default vdsm.conf settings as a Puppet type, "vdsm_config", rather than managing file contents via a template (which also allows purging unmanaged entries)

A lot of the above is handled by oVirt already, but in the past customizations were not possible (firewall rules, vdsm.conf entries, etc), so if I was going to have to manage those separately I wanted them in Puppet :)

Now that I'm aware of "ovirt-host-deploy" and have seen the potential of using vdsm-reg in Puppet, I'm curious what the "right" way to automate node deployments in oVirt is. Ideally I could still use Puppet to configure the method or fill in the "gaps" for customizations that are needed (i.e. enabling iSER).

I'd be glad to know what the recommended method for automating oVirt would be, and would be happy to refactor my module in the hope it would offer other Puppet users a quick-start way to deploy oVirt while still doing things the "oVirt way".

If the right way is using the ovirt-node images, then I'd like to know what customizations are possible on those images and so on. oVirt is moving very rapidly, and despite using oVirt for a long time I'm still learning new things about it almost daily, so forgive any of my assumptions above that may be wrong :)

Thanks,
- Trey

----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "ybronhei" <ybronhei@redhat.com> Sent: Tuesday, August 5, 2014 9:50:04 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
On Tue, Aug 5, 2014 at 12:37 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
Hi,
Do you actually use puppet over ovirt-node? This is unsupported.
Regards, Alon
I use Puppet to configure everything on the system and some of those things conflict with changes made by ovirt-engine when adding a node so I've moved to managing those changes in an ovirt Puppet module.
When you refer to "ovirt-node" your refering to the pre-built ISO images? AFAIK the ovirt-node is not an option for me as I use Infiniband to do all storage connections. Currently NFS is done using IPoIB (NFS over RDMA was crashing so I am not pursuing at the moment) and iSCSI is done using iSER. At some point in time I'd like to use IB interfaces in my guests utilizing SR-IOV which my Mellanox cards support, but I'm postponing that project till after our HPC cluster's upgraded :)
Right now the Puppet module does the following:
* Configures firewall rules just like what is done by ovirt (with exception of a few not supported by puppetlabs-firewall, so I override global directive that purges unknown firewall rules)
This should be done on the engine side, using engine-config; look for IPTablesConfig or IPTablesConfigSiteCustom in 3.5.
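For example, assuming IPTablesConfigSiteCustom holds extra raw iptables rules that get merged into the generated firewall config (untested, and the Zabbix agent port is only an illustration):

engine-config -g IPTablesConfig
engine-config -s IPTablesConfigSiteCustom="-A INPUT -p tcp --dport 10050 -j ACCEPT"

then restart ovirt-engine so hosts deployed afterwards pick it up.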
** This is necessary as I have to add other Firewall rules, such as Zabbix * Exports /etc/hosts entry for the node that is collected by ovirt-engine host so that if DNS goes down ovirt-engine does not lose access to ovirtmgmt interfaces
why not just have a backup for dns?
* Install yum repos for ovirt
this should not be done within ovirt-node.
* install vdsm
this should be done by host-deploy.
* ensure vdsm.conf exists * Populates /etc/vdsm/vdsm.id (IIRC a bug in previous ovirt required this)
Why? Your machine should have a valid BIOS UUID.
* Ensures vdsmd is running and will start at boot * Ensures vdsm sudo rules are present. * Manages default vdsm.conf configurations as a Puppet type, "vdsm_config", rather than managing file contents via template (allows for purging unmanaged entries also)
the above should be done via host-deploy.
A lot of the above is handled by ovirt already, but in the past customizations were not possible (firewall rules, vdsm.conf entries, etc) so if I was going to have to manage those separately I wanted them in Puppet :)
Now that I'm aware of "ovirt-host-deploy" and have seen the potential of using vdsm-reg in Puppet, I'm curious what is the "right" way to automate node deployment's in oVirt. Ideally I could still use Puppet to configure the method or fill in the "gaps" for customizations that are needed (ie enabling iSER).
I'd be glad to know what the recommended method for automating ovirt would be, and would be happy to refactor my module in hopes it would offer other Puppet users a quick-start way to deploy oVirt while still doing things the "ovirt way".
If the right way is using the ovirt-node images then I'd like to know what customizations are possible on those images and so on. oVirt is moving very rapidly and despite using ovirt for a long time I'm still learning new things about it almost daily, so forgive any of my assumptions above that may be wrong :).
ovirt-node is not as flexible as a standard host; if you require sssd and misc packages, I suggest you stick with a generic host.
Thanks, - Trey

On 5-8-2014 20:54, Alon Bar-Lev wrote:
* ensure vdsm.conf exists * Populates /etc/vdsm/vdsm.id (IIRC a bug in previous ovirt required this) why? you machine should have valid bios uuid.
There are manufacturers that don't set a unique BIOS UUID; Supermicro is one of them. I had an RFE for this, but it has been closed. I feel it should be reopened and implemented: the change is minimal and guarantees that there is a unique UUID.

Regards,
Joop

Yep, these are Supermicro servers.

# facter serialnumber
1234567890

On Tue, Aug 5, 2014 at 2:07 PM, Joop <jvdwege@xs4all.nl> wrote:
On 5-8-2014 20:54, Alon Bar-Lev wrote:
* ensure vdsm.conf exists * Populates /etc/vdsm/vdsm.id (IIRC a bug in previous ovirt required this) why? you machine should have valid bios uuid.
There are manufactures that don't set a unique bios uuid. Supermicro is one of them. I had a RFE but that has been closed but I feel that it should be reopened and implemented. The change is minimal and guarantees that there is a unique uuid.
Regards,
Joop
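That is why the Puppet module pre-seeds /etc/vdsm/vdsm.id; the effect is essentially a one-time

uuidgen > /etc/vdsm/vdsm.id

per host. As far as I understand, vdsm prefers that file over the BIOS UUID whenever it exists, so the generic Supermicro values stop mattering.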

It seems that you should be able to change it: http://www.supermicro.com/support/faqs/faq.cfm?faq=15869

I did not try that myself. Not sure it's better than simply managing vdsm.id manually.

-- Didi
From: "Trey Dockendorf" <treydock@gmail.com> To: "Joop" <jvdwege@xs4all.nl> Cc: "users" <users@ovirt.org> Sent: Tuesday, August 5, 2014 10:17:06 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Yep, these are Supermicro servers.
# facter serialnumber 1234567890
On Tue, Aug 5, 2014 at 2:07 PM, Joop <jvdwege@xs4all.nl> wrote:
On 5-8-2014 20:54, Alon Bar-Lev wrote:
* ensure vdsm.conf exists * Populates /etc/vdsm/vdsm.id (IIRC a bug in previous ovirt required this) why? you machine should have valid bios uuid.
There are manufactures that don't set a unique bios uuid. Supermicro is one of them. I had a RFE but that has been closed but I feel that it should be reopened and implemented. The change is minimal and guarantees that there is a unique uuid.
Regards,
Joop
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Thank you for the rapid responses! I'll try to wait a few minutes before responding to keep from spamming the list :)

On Tue, Aug 5, 2014 at 1:54 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
Right now the Puppet module does the following:
* Configures firewall rules just like what is done by ovirt (with exception of a few not supported by puppetlabs-firewall, so I override global directive that purges unknown firewall rules)
this should be done at engine side, using engine-config, seek IPTablesConfig or IPTablesConfigSiteCustom in 3.5.
I will look into this, thank you! I had seen other threads about this, but since I apply my "firewall" module globally I thought it would be "easier" to handle in Puppet. If the oVirt-supported method is "engine-config", then I'll switch to that and ensure the firewall is managed via oVirt for nodes.
** This is necessary as I have to add other Firewall rules, such as Zabbix * Exports /etc/hosts entry for the node that is collected by ovirt-engine host so that if DNS goes down ovirt-engine does not lose access to ovirtmgmt interfaces
why not just have a backup for dns?
This is somewhat complicated. My oVirt nodes have no IP address on my campus' public networks. They are physically connected to numerous public networks (one being on science DMZ) and for security reasons they only have IPs on my private LAN. My private LAN's DNS is managed by me as our campus DNS does not yet support a way for me to apply DNS changes to RFC1918 spaces via any form of API (foreman's smart-proxy). I do have two DNS VMs that can handle things. In the past I had only 1 VM doing DNS and when it went offline for maintenance ovirt-engine saw the ovirt-nodes as offline and tried to fence them due to name resolution being down. To remove DNS as a point-of-failure for oVirt I've decided to use /etc/hosts. It's a hack and one I'm comfortable with until I'm more comfortable with my DNS deployment :)
* Install yum repos for ovirt
this should not be done within ovirt-node.
* install vdsm
this should be done by host-deploy.
* ensure vdsm.conf exists * Populates /etc/vdsm/vdsm.id (IIRC a bug in previous ovirt required this)
why? you machine should have valid bios uuid.
IIRC it's because my servers' BIOS serial numbers are all the same; they are all 0123456789. I vaguely recall running into an issue when EL6 was first supported where the vdsm.id file on my first two nodes ended up having identical contents, and I believe it was due to the BIOS serial numbers being the same (something our vendor neglected, I think, but before my time).
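For reference, both values can be checked straight from DMI to see whether the UUID is as generic as the serial number (dmidecode ships with CentOS):

# dmidecode -s system-serial-number
# dmidecode -s system-uuid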
* Ensures vdsmd is running and will start at boot * Ensures vdsm sudo rules are present. * Manages default vdsm.conf configurations as a Puppet type, "vdsm_config", rather than managing file contents via template (allows for purging unmanaged entries also)
the above should be done via host-deploy.
A lot of the above is handled by ovirt already, but in the past customizations were not possible (firewall rules, vdsm.conf entries, etc) so if I was going to have to manage those separately I wanted them in Puppet :)
Now that I'm aware of "ovirt-host-deploy" and have seen the potential of using vdsm-reg in Puppet, I'm curious what is the "right" way to automate node deployment's in oVirt. Ideally I could still use Puppet to configure the method or fill in the "gaps" for customizations that are needed (ie enabling iSER).
I'd be glad to know what the recommended method for automating ovirt would be, and would be happy to refactor my module in hopes it would offer other Puppet users a quick-start way to deploy oVirt while still doing things the "ovirt way".
If the right way is using the ovirt-node images then I'd like to know what customizations are possible on those images and so on. oVirt is moving very rapidly and despite using ovirt for a long time I'm still learning new things about it almost daily, so forgive any of my assumptions above that may be wrong :).
ovirt-node is not flexible as standard host, if you require sssd and misc packages I suggest you stick with generic host.
That was one of my assumptions. The use of sssd is not entirely required, but for simplicity's sake I apply it across the board to all systems to maintain UID and GID consistency. If ovirt-node supports using InfiniBand and configuring an InfiniBand interface with "CONNECTED_MODE=yes" (MTU of 65520), then that's really the main requirement. To support SR-IOV with the Mellanox cards I may have to use the Mellanox OFED packages instead of the upstream "Infiniband Support" packages, and if that's the case then I will have to use a generic host, as something like the Mellanox packages obviously can't be considered for support in ovirt-node.

Thanks,
- Trey
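P.S. For context, the IPoIB side of this is nothing exotic; on a stock CentOS 6.5 host it is just an ifcfg file along these lines (device name and addressing are placeholders for my environment):

# /etc/sysconfig/network-scripts/ifcfg-ib0
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
CONNECTED_MODE=yes
MTU=65520

The question is really whether ovirt-node can express the CONNECTED_MODE/MTU part.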

On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei@redhat.com> wrote:
Hey,
Just noticed something that I forgot about.. before filing new BZ, see in ovirt-host-deploy README.environment [1] the section: VDSM/configOverride(bool) [True] Override vdsm configuration file.
changing it to false will keep your vdsm.conf file as is after deploying the host again (what happens after node upgrade)
[1] https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment
please check if that what you meant..
Thanks, Yaniv Bronhaim.
I was unaware of that package. I will check it out, as that seems to be what I am looking for.

I have not filed this in BZ and will hold off pending ovirt-host-deploy. If you feel a BZ is still necessary then please do file one, and I would be happy to provide input if it would help.

Right now this is my workflow:

1. Foreman provisions the bare-metal server with CentOS 6.5.
2. Once provisioned and the system rebooted, Puppet applies the puppet-ovirt [1] module, which adds the necessary yum repos and installs packages. Part of my Puppet deployment is basic things like sudo management (vdsm's sudo is accounted for), sssd configuration, and other aspects that are needed by every system in my infrastructure. Part of the ovirt::node Puppet class is managing vdsm.conf, and in my case that means ensuring iSER is enabled for iSCSI over IB.
3. Once the host is online and has had the full Puppet catalog applied, I log into the ovirt-engine web interface and add the host (pulling its data via the Foreman provider).

What I've noticed is that after step #3, after a host is added by ovirt-engine, the vdsm.conf file is reset to default and I have to reapply Puppet before the host can be used, as one of my Data Storage Domains requires iSER (not available over TCP).

What would be the workflow using ovirt-host-deploy? Thus far I've had to piece together my workflow based on the documentation, filling in blanks where possible, since I do require customizations to vdsm.conf and the documented workflow of adding a host via the web UI does not allow for such customization.

Thanks,
- Trey

[1] - https://github.com/treydock/puppet-ovirt (README not fully updated as I'm still working out how to use Puppet with oVirt)

----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "ybronhei" <ybronhei@redhat.com> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Alon Bar-Lev" <alonbl@redhat.com> Sent: Tuesday, August 5, 2014 9:36:24 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei@redhat.com> wrote:
Hey,
Just noticed something that I forgot about.. before filing new BZ, see in ovirt-host-deploy README.environment [1] the section: VDSM/configOverride(bool) [True] Override vdsm configuration file.
changing it to false will keep your vdsm.conf file as is after deploying the host again (what happens after node upgrade)
[1] https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment
please check if that what you meant..
Thanks, Yaniv Bronhaim.
I was unaware of that package. I will check that out as that seems to be what I am looking for.
I have not filed this in BZ and will hold off pending ovirt-host-deploy. If you feel a BZ is still necessary then please do file one and I would be happy to provide input if it would help.
Right now this is my workflow.
1. Foreman provisions bare-metal server with CentOS 6.5 2. Once provisioned and system rebooted Puppet applies puppet-ovirt [1] module that adds the necessary yum repos
...and it should stop here.
, and installs packages. Part of my Puppet deployment is basic things like sudo management (vdsm's sudo is account for), sssd configuration, and other aspects that are needed by every system in my infrastructure. Part of the ovirt::node Puppet class is managing vdsm.conf, and in my case that means ensuring iSER is enabled for iSCSI over IB.
You can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf:
---
VDSM_CONFIG/section/key=str:content
---
This will create a proper vdsm.conf when host-deploy is initiated. You should then use the REST API to initiate host-deploy.
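For the iSER case above that would be a single line (the file name is arbitrary; untested):
---
VDSM_CONFIG/irs/iscsi_default_ifaces=str:iser,default
---
Since host-deploy reads these conf.d snippets on each run, the setting should also survive re-deploys and upgrades triggered from the engine.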
3. Once host is online and has had the full Puppet catalog applied I log into ovirt-engine web interface and add those host (pulling it's data via the Foreman provider).
right, but you should let this process install packages and manage configuration.
What I've noticed is that after step #3, after a host is added by ovirt-engine, the vdsm.conf file is reset to default and I have to reapply Puppet before it can be used as the one of my Data Storage Domains requires iSER (not available over TCP).
right, see above.
What would be the workflow using ovirt-host-deploy? Thus far I've had to piece together my workflow based on the documentation and filling in blanks where possible since I do require customizations to vdsm.conf and the documented workflow of adding a host via web UI does not allow for such customization.
Thanks, - Trey
[1] - https://github.com/treydock/puppet-ovirt (README not fully updated as still working out how to use Puppet with oVirt)

Ah, thank you for the input! Just so I'm not spending time implementing the wrong changes, let me confirm I understand your comments.

1) Deploy host with Foreman
2) Apply Puppet catalog including the ovirt Puppet module
3) Initiate host-deploy via the REST API

In the ovirt module the following takes place:

2a) Add yum repos
2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf

For #2b I have a few questions:

* The name of the ".conf" file is simply for sorting and labeling/organization; it has no functional impact on what those overrides apply to?
* That file is managed on the ovirt-engine server, not the actual nodes?
* Is there any way to apply overrides to specific hosts? For example, if I have some hosts that require a config and others that don't, how would I separate those *.conf files? This is more theoretical, as right now my setup is common across all nodes.

For #3, the implementation of API calls from within Puppet is a challenge and one I can't tackle yet, but I will definitely make it a goal for the future. In the meantime, what's the "manual" way to initiate host-deploy? Is there a CLI command that would have the same result as an API call, or is the recommended way to perform the API call manually (i.e. curl)?

Thanks!
- Trey

----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com> Sent: Tuesday, August 5, 2014 10:01:14 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Ah, thank you for the input! Just so I'm not spending time implementing the wrong changes, let me confirm I understand your comments.
1) Deploy host with Foreman 2) Apply Puppet catalog including ovirt Puppet module 3) Initiate host-deploy via rest API
In the ovirt module the following takes place:
2a) Add yum repos 2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf
you can have any # of files with any prefix :))
For #2b I have a few questions
* The name of the ".conf" file is simply for sorting and labeling/organization, it has not functional impact on what those overrides apply to?
right.
* That file is managed on the ovirt-engine server, not the actual nodes?
Currently on the host; in the future we will provide a method to add this to the engine database [1].

[1] http://gerrit.ovirt.org/#/c/27064/
* Is there any way to apply overrides to specific hosts? For example if I have some hosts that require a config and others that don't, how would I separate those *.conf files? This is more theoretical as right now my setup is common across all nodes.
the puppet module can put whatever is required on each host.
For #3...the implementation of API calls from within Puppet is a challenge and one I can't tackle yet, but definitely will make it a goal for the future. In the meantime, what's the "manual" way to initiate host-deploy? Is there a CLI command that would have the same result as an API call, or is the recommended way to perform the API call manually (i.e. curl)?
well, you can register the host using the following protocol [1], but it is difficult to do this securely; what you actually need is to establish ssh trust for root with the engine key and then register.

you can also add the host via the rest api using curl by something like (I have not checked), POSTed to https://admin%40internal:password@engine/ovirt-engine/api/hosts :
---
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<host>
  <name>host1</name>
  <address>dns</address>
  <ssh>
    <authentication_method>publickey</authentication_method>
  </ssh>
  <cluster id="cluster-uuid"/>
</host>
---

you can also use the ovirt-engine-sdk-python package:
---
import ovirtsdk.api
import ovirtsdk.xml

sdk = ovirtsdk.api.API(
    url='https://host/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
sdk.hosts.add(
    ovirtsdk.xml.params.Host(
        name='host1',
        address='host1',
        cluster=sdk.clusters.get('cluster'),
        ssh=ovirtsdk.xml.params.SSH(
            authentication_method='publickey',
        ),
    )
)
---
[1] http://www.ovirt.org/Features/HostDeployProtocol
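For automation it may also help to confirm that host-deploy actually ran after the add call; here is a rough sketch of polling with the same Python SDK (the 'host1' name and engine URL are placeholders, and the status.state accessor is assumed to behave as in the 3.x ovirt-engine-sdk-python):
---
import time

import ovirtsdk.api

sdk = ovirtsdk.api.API(
    url='https://engine/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # lab setting; verify the CA certificate in production
)

# After sdk.hosts.add(...) as above, wait for host-deploy to finish.
while True:
    host = sdk.hosts.get(name='host1')
    state = host.status.state
    print('host1 state: %s' % state)
    if state in ('up', 'install_failed', 'non_operational'):
        break
    time.sleep(10)
---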
Thanks! - Trey
On Tue, Aug 5, 2014 at 1:45 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "ybronhei" <ybronhei@redhat.com> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Alon Bar-Lev" <alonbl@redhat.com> Sent: Tuesday, August 5, 2014 9:36:24 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei@redhat.com> wrote:
Hey,
Just noticed something that I forgot about.. before filing a new BZ, see in ovirt-host-deploy README.environment [1] the section:
VDSM/configOverride(bool) [True]
    Override vdsm configuration file.
changing it to false will keep your vdsm.conf file as is after deploying the host again (what happens after node upgrade)
[1] https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment
please check if that is what you meant..
Thanks, Yaniv Bronhaim.
I was unaware of that package. I will check that out as that seems to be what I am looking for.
I have not filed this in BZ and will hold off pending ovirt-host-deploy. If you feel a BZ is still necessary then please do file one and I would be happy to provide input if it would help.
Right now this is my workflow.
1. Foreman provisions bare-metal server with CentOS 6.5
2. Once provisioned and system rebooted Puppet applies puppet-ovirt [1] module that adds the necessary yum repos
and should stop here..
, and installs packages. Part of my Puppet deployment is basic things like sudo management (vdsm's sudo is accounted for), sssd configuration, and other aspects that are needed by every system in my infrastructure. Part of the ovirt::node Puppet class is managing vdsm.conf, and in my case that means ensuring iSER is enabled for iSCSI over IB.
you can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf
---
VDSM_CONFIG/section/key=str:content
---
this will create a proper vdsm.conf when host-deploy is initiated.
you should now use the rest api to initiate host-deploy.
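As a concrete example of the above for the iSER setting discussed in this thread, a file such as /etc/ovirt-host-deploy.conf.d/50-iser.conf (the name here is only illustrative; as noted, the prefix only affects ordering) might contain:
---
VDSM_CONFIG/irs/iscsi_default_ifaces=str:iser,default
---
This is only a sketch of the key=type:value format shown above; the exact syntax (including whether a section header is needed, and whether setting VDSM/configOverride=bool:False is the better fit when vdsm.conf is managed externally, per the note quoted earlier in this message) is worth confirming against README.environment.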
3. Once the host is online and has had the full Puppet catalog applied I log into the ovirt-engine web interface and add the host (pulling its data via the Foreman provider).
right, but you should let this process install packages and manage configuration.
What I've noticed is that after step #3, after a host is added by ovirt-engine, the vdsm.conf file is reset to default and I have to reapply Puppet before it can be used, as one of my Data Storage Domains requires iSER (not available over TCP).
right, see above.
What would be the workflow using ovirt-host-deploy? Thus far I've had to piece together my workflow based on the documentation and filling in blanks where possible since I do require customizations to vdsm.conf and the documented workflow of adding a host via web UI does not allow for such customization.
Thanks, - Trey
[1] - https://github.com/treydock/puppet-ovirt (README not fully updated as still working out how to use Puppet with oVirt)

Excellent, so installing 'ovirt-host-deploy' on each node then configuring the /etc/ovirt-host-deploy.conf.d files seems very automate-able, will see how it works in practice.

Regarding the actual host registration and getting the host added to ovirt-engine, are there other methods besides the API and the sdk? Would it be possible to configure the necessary ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? I notice that running 'ovirt-host-deploy' wants to make whatever host executes it an oVirt hypervisor but haven't yet run it all the way through as no server to test with at this time. There seems to be no "--help" or similar command line argument.

I'm sure this will all be more clear once I attempt the steps and run through the motions. Will try to find a system to test on so I'm ready once our new servers arrive.

Thanks,
- Trey

----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 10:45:12 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Excellent, so installing 'ovirt-host-deploy' on each node then configuring the /etc/ovirt-host-deploy.conf.d files seems very automate-able, will see how it works in practice.
you do not need to install the ovirt-host-deploy, just create the files.
Regarding the actual host registration and getting the host added to ovirt-engine, are there other methods besides the API and the sdk? Would it be possible to configure the necessary ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? I notice that running 'ovirt-host-deploy' wants to make whatever host executes it an oVirt hypervisor but haven't yet run it all the way through as no server to test with at this time. There seems to be no "--help" or similar command line argument.
you should not run host-deploy directly, but via the engine's process, either registration or add host as I replied previously.

when the base system is ready, you issue add host via the api of the engine or via the ui; the other alternative is to register the host using the host-deploy protocol, and approve the host via the api of the engine or via the ui.
I'm sure this will all be more clear once I attempt the steps and run through the motions. Will try to find a system to test on so I'm ready once our new servers arrive.
Thanks, - Trey

Thanks for clarifying, makes sense now.

The public key trust needed for registration, is that the same key that would be used when adding a host via the UI?

Any examples of how to use the HostDeployProtocol [1]? I like the idea of using registration but haven't the slightest idea how to implement what's described in the docs [1]. I do recall seeing an article posted (searching email and can't find) that had a nice walk-through of how to use the oVirt API using browser tools. I'm unsure if this HostDeployProtocol would be done that way or via some other method.

Thanks,
- Trey

[1] http://www.ovirt.org/Features/HostDeployProtocol

----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 11:27:45 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Thanks for clarifying, makes sense now.
The public key trust needed for registration, is that the same key that would be used when adding host via UI?
yes. you can download it via:
$ curl 'http://engine/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY'
$ curl 'http://engine/ovirt-engine/services/host-register?version=1&command=get-ssh-trust'
probably better to use https and verify the CA certificate fingerprint if you do that from the host.
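Building on that, the earlier advice to "establish ssh trust for root with engine key" could be scripted on the host before registration; a rough Python 2 sketch, matching the CentOS 6 hosts in this thread (the engine hostname, plain-http URL and paths are illustrative, and as noted above https with a verified CA certificate is preferable):
---
import os
import urllib2

# URL from the pki-resource example above (engine hostname is a placeholder)
URL = ('http://engine/ovirt-engine/services/pki-resource'
       '?resource=engine-certificate&format=OPENSSH-PUBKEY')

key = urllib2.urlopen(URL).read().strip()

ssh_dir = '/root/.ssh'
auth_keys = os.path.join(ssh_dir, 'authorized_keys')

if not os.path.isdir(ssh_dir):
    os.makedirs(ssh_dir, 0o700)

# Append the engine key only if it is not already present.
with open(auth_keys, 'a+') as f:
    f.seek(0)
    if key not in f.read():
        f.write(key + '\n')
---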
Any examples of how to use the HostDeployProtocol [1]? I like the idea of using registration but haven't the slightest idea how to implement what's described in the docs [1]. I do recall seeing an article posted (searching email and can't find) that had a nice walk-through of how to use the oVirt API using browser tools. I'm unsure if this HostDeployProtocol would be done that way or via some other method.
there are two apis. the formal rest-api is exposed by the engine and can be accessed using any rest api tool or the ovirt-engine-cli, ovirt-engine-sdk-java, ovirt-engine-sdk-python wrappers; I sent you a minimal example in a previous message. and there is the host-deploy protocol [1], which should have been exposed in the rest-api, but for some reason I cannot understand it was not included in the public interface of the engine.

the advantage of using the rest-api is that you can achieve the full cycle with it; the add host cycle is what you seek. the host-deploy protocol just registers the host, but the sysadmin needs to approve the host via the ui (or via the rest api) before it is usable.
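If the approval is to happen from automation rather than the UI, a rough Python SDK sketch might look like the following; it assumes the 3.x SDK exposes the approve action on the host broker (mirroring the REST approve action), which is worth verifying against the installed SDK version:
---
import ovirtsdk.api
import ovirtsdk.xml

sdk = ovirtsdk.api.API(
    url='https://engine/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

# 'host1' is the name the host registered itself with (placeholder).
host = sdk.hosts.get(name='host1')
if host is not None:
    # Assumed to map to POST .../hosts/{id}/approve in the REST API.
    host.approve(ovirtsdk.xml.params.Action())
---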
Thanks, - Trey
[1] http://www.ovirt.org/Features/HostDeployProtocol

I likely won't automate this yet, as a lot of what's coming in 3.5 seems to obsolete many things I was doing previously via Puppet. In particular the Foreman integration and the ability to add custom iptables rules to engine-config. Previous posts on the list made it seem like modifying "IPTables" could potentially make upgrades less reliable.

Created a gist of a working series of commands based on Alon's example using the Host Deploy Protocol [1].

https://gist.github.com/treydock/570a776b5c160bca7c9c

Curious, where is the public key used by the ovirt-engine stored? The one that is available using command=get-ssh-trust. Is there a way to query it from the engine? I'm thinking it would be possible to create a custom Facter fact that stores the value of that public key so it is easier to re-use and access for deployment.

Thanks,
- Trey

[1] - http://www.ovirt.org/Features/HostDeployProtocol
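Since the key can be fetched over HTTP as shown earlier in the thread, one low-effort option (short of writing a custom Ruby fact) would be to drop it into Facter's external facts directory from a script or cron job; a rough sketch, with the engine hostname, fact name and facts.d path all being illustrative assumptions:
---
import urllib2

# get-ssh-trust URL from earlier in the thread (engine hostname is a placeholder)
URL = ('http://engine/ovirt-engine/services/host-register'
       '?version=1&command=get-ssh-trust')

key = urllib2.urlopen(URL).read().strip()

# Facter 1.7+ reads key=value pairs from *.txt files under /etc/facter/facts.d
with open('/etc/facter/facts.d/ovirt_engine_ssh_key.txt', 'w') as f:
    f.write('ovirt_engine_ssh_key=%s\n' % key)
---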
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 11:27:45 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Thanks for clarifying, makes sense now.
The public key trust needed for registration, is that the same key that would be used when adding host via UI?
yes.
you can download it via: $ curl 'http://engine/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY' $ curl 'http://engine/ovirt-engine/services/host-register?version=1&command=get-ssh-trust'
probably better to use https and verify CA certificate fingerprint if you do that from host.
Any examples of how to use the HostDeployProtocol [1]? I like the idea of using registration but haven't the slightest idea how to implement what's described in the docs [1]. I do recall seeing an article posted (searching email and can't find) that had a nice walk-through of how to use the oVirt API using browser tools. I'm unsure if this HostDeployProtocol would be done that way or via some other method.
there are two apis, the formal rest-api that is exposed by the engine and can be accessed using any rest api tool or ovirt-engine-cli, ovirt-engine-sdk-java, ovirt-engine-sdk-python wrappers. I sent you a minimal example in previous message.
and the host-deploy protocol[1], which should have been exposed in the rest-api, but for some reason I cannot understand it was not included in the public interface of the engine.
the advantage of using the rest-api is that you can achieve full cycle using the protocol, the add host cycle is what you seek.
the host-deploy protocol just register the host, but the sysadmin needs to approve the host via the ui (or via the rest api) before it is usable.
Thanks, - Trey
[1] http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 3:01 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 10:45:12 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Excellent, so installing 'ovirt-host-deploy' on each node then configuring the /etc/ovirt-host-deploy.conf.d files seems very automate-able, will see how it works in practice.
you do not need to install the ovirt-host-deploy, just create the files.
Regarding the actual host registration and getting the host added to ovirt-engine, are there other methods besides the API and the sdk? Would it be possible to configure the necessary ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? I notice that running 'ovirt-host-deploy' wants to make whatever host executes it a ovir hypervisor but haven't yet run it all the way through as no server to test with at this time. There seems to be no "--help" or similar command line argument.
you should not run host-deploy directly, but via the engine's process, either registration or add host as I replied previously.
when the base system is ready, you issue add host via the engine api or via the ui; the other alternative is to register the host using the host-deploy protocol, and then approve the host via the engine api or via the ui.
I'm sure this will all be more clear once I attempt the steps and run through the motions. Will try to find a system to test on so I'm ready once our new servers arrive.
Thanks, - Trey
On Tue, Aug 5, 2014 at 2:23 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com> Sent: Tuesday, August 5, 2014 10:01:14 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Ah, thank you for the input! Just so I'm not spending time implementing the wrong changes, let me confirm I understand your comments.
1) Deploy host with Foreman 2) Apply Puppet catalog including ovirt Puppet module 3) Initiate host-deploy via rest API
In the ovirt module the following takes place:
2a) Add yum repos 2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf
you can have any # of files with any prefix :))
For #2b I have a few questions
* The name of the ".conf" file is simply for sorting and labeling/organization, it has no functional impact on what those overrides apply to?
right.
* That file is managed on the ovirt-engine server, not the actual nodes?
currently on the host, in future we will provide a method to add this to engine database[1]
[1] http://gerrit.ovirt.org/#/c/27064/
* Is there any way to apply overrides to specific hosts? For example if I have some hosts that require a config and others that don't, how would I separate those *.conf files? This is more theoretical as right now my setup is common across all nodes.
the puppet module can put whatever is required on each host.
For #3...the implementation of API calls from within Puppet is a challenge and one I can't tackle yet, but definitely will make it a goal for the future. In the mean time, what's the "manual" way to initiate host-deploy? Is there a CLI command that would have the same result as an API call or is the recommended way to perform the API call manually (ie curl)?
well, you can register the host using the following protocol [1], but it is difficult to do this securely; what you actually need is to establish ssh trust for root with the engine key and then register.
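a minimal sketch of establishing that trust on the host, assuming the get-ssh-trust service returns the key in OpenSSH public key format (as the OPENSSH-PUBKEY resource above does), could be:

$ mkdir -p /root/.ssh && chmod 700 /root/.ssh
$ curl -s 'http://engine/ovirt-engine/services/host-register?version=1&command=get-ssh-trust' >> /root/.ssh/authorized_keys
$ chmod 600 /root/.ssh/authorized_keys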
you can also use the register/add-host command with curl, by POSTing something like the following (I have not checked):
https://admin%40internal:password@engine/ovirt-engine/api/hosts
---
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<host>
  <name>host1</name>
  <address>dns</address>
  <ssh>
    <authentication_method>publickey</authentication_method>
  </ssh>
  <cluster id="cluster-uuid"/>
</host>
---
you can also use the ovirt-engine-sdk-python package:
---
import ovirtsdk.api
import ovirtsdk.xml

sdk = ovirtsdk.api.API(
    url='https://host/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
sdk.hosts.add(
    ovirtsdk.xml.params.Host(
        name='host1',
        address='host1',
        cluster=sdk.clusters.get('cluster'),
        ssh=ovirtsdk.xml.params.SSH(
            authentication_method='publickey',
        ),
    )
)
---
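if it helps, one way to confirm the add went through (a sketch using the rest-api's search support; names and credentials are placeholders):

$ curl -k -u 'admin@internal:password' 'https://engine/ovirt-engine/api/hosts?search=name%3Dhost1'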
[1] http://www.ovirt.org/Features/HostDeployProtocol
Thanks! - Trey
On Tue, Aug 5, 2014 at 1:45 PM, Alon Bar-Lev <alonbl@redhat.com> wrote: > > > ----- Original Message ----- >> From: "Trey Dockendorf" <treydock@gmail.com> >> To: "ybronhei" <ybronhei@redhat.com> >> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" >> <fabiand@redhat.com>, >> "Dan >> Kenigsberg" <danken@redhat.com>, "Itamar >> Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, >> "Alon >> Bar-Lev" <alonbl@redhat.com> >> Sent: Tuesday, August 5, 2014 9:36:24 PM >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm >> configuration options >> >> On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei@redhat.com> >> wrote: >> > Hey, >> > >> > Just noticed something that I forgot about.. >> > before filing new BZ, see in ovirt-host-deploy README.environment >> > [1] >> > the >> > section: >> > VDSM/configOverride(bool) [True] >> > Override vdsm configuration file. >> > >> > changing it to false will keep your vdsm.conf file as is after >> > deploying >> > the >> > host again (what happens after node upgrade) >> > >> > [1] >> > https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment >> > >> > please check if that what you meant.. >> > >> > Thanks, >> > Yaniv Bronhaim. >> > >> >> I was unaware of that package. I will check that out as that seems >> to >> be what I am looking for. >> >> I have not filed this in BZ and will hold off pending >> ovirt-host-deploy. If you feel a BZ is still necessary then please >> do >> file one and I would be happy to provide input if it would help. >> >> Right now this is my workflow. >> >> 1. Foreman provisions bare-metal server with CentOS 6.5 >> 2. Once provisioned and system rebooted Puppet applies puppet-ovirt >> [1] module that adds the necessary yum repos > > and should stop here.. > >> , and installs packages. >> Part of my Puppet deployment is basic things like sudo management >> (vdsm's sudo is account for), sssd configuration, and other aspects >> that are needed by every system in my infrastructure. Part of the >> ovirt::node Puppet class is managing vdsm.conf, and in my case that >> means ensuring iSER is enabled for iSCSI over IB. > > you can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf > --- > VDSM_CONFIG/section/key=str:content > --- > > this will create a proper vdsm.conf when host-deploy is initiated. > > you should now use the rest api to initiate host-deploy. > >> 3. Once host is online and has had the full Puppet catalog applied I >> log into ovirt-engine web interface and add those host (pulling it's >> data via the Foreman provider). > > right, but you should let this process install packages and manage > configuration. > >> What I've noticed is that after step #3, after a host is added by >> ovirt-engine, the vdsm.conf file is reset to default and I have to >> reapply Puppet before it can be used as the one of my Data Storage >> Domains requires iSER (not available over TCP). > > right, see above. > >> What would be the workflow using ovirt-host-deploy? Thus far I've >> had >> to piece together my workflow based on the documentation and filling >> in blanks where possible since I do require customizations to >> vdsm.conf and the documented workflow of adding a host via web UI >> does >> not allow for such customization. >> >> Thanks, >> - Trey >> >> [1] - https://github.com/treydock/puppet-ovirt (README not fully >> updated as still working out how to use Puppet with oVirt) >> >> > >> > On 08/05/2014 08:12 AM, Trey Dockendorf wrote: >> >> >> >> I'll file BZ. 
As far as I can recall this has been an issue >> >> since >> >> 3.3.x >> >> as >> >> I have been using Puppet to modify values and have had to rerun >> >> Puppet >> >> after installing a node via GUI and when performing update from >> >> GUI. >> >> Given >> >> that it has occurred when VDSM version didn't change on the node >> >> it >> >> seems >> >> likely to be something being done by Python code that bootstraps >> >> a >> >> node >> >> and >> >> performs the other tasks. I won't have any systems available to >> >> test >> >> with >> >> for a few days. New hardware specifically for our oVirt >> >> deployment >> >> is >> >> on >> >> order so should be able to more thoroughly debug and capture logs >> >> at >> >> that >> >> time. >> >> >> >> Would using vdsm-reg be a better solution for adding new nodes? >> >> I >> >> only >> >> tried using vdsm-reg once and it went very poorly...lots of >> >> missing >> >> dependencies not pulled in from yum install I had to install >> >> manually >> >> via >> >> yum. Then the node was auto added to newest cluster with no >> >> ability >> >> to >> >> change the cluster. Be happy to debug that too if there's some >> >> docs >> >> that >> >> outline the expected behavior. >> >> >> >> Using vdsm-reg or something similar seems like a better fit for >> >> puppet >> >> deployed nodes, as opposed to requiring GUI steps to add the >> >> node. >> >> >> >> Thanks >> >> - Trey >> >> On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei@redhat.com> wrote: >> >> >> >>> On 07/31/2014 01:28 AM, Trey Dockendorf wrote: >> >>> >> >>>> I'm running ovirt nodes that are stock CentOS 6.5 systems with >> >>>> VDSM >> >>>> installed. I am using iSER to do iSCSI over RDMA and to make >> >>>> that >> >>>> work I have to modify /etc/vdsm/vdsm.conf to include the >> >>>> following: >> >>>> >> >>>> [irs] >> >>>> iscsi_default_ifaces = iser,default >> >>>> >> >>>> I've noticed that any time I upgrade a node from the engine web >> >>>> interface that changes to vdsm.conf are wiped out. I don't >> >>>> know >> >>>> if >> >>>> this is being done by the configuration code or by the vdsm >> >>>> package. >> >>>> Is there a more reliable way to ensure changes to vdsm.conf are >> >>>> NOT >> >>>> removed automatically? >> >>>> >> >>> >> >>> Hey, >> >>> >> >>> vdsm.conf shouldn't wiped out and shouldn't changed at all >> >>> during >> >>> upgrade. >> >>> other related conf files (such as libvirtd.conf) might be >> >>> overrided >> >>> to >> >>> keep >> >>> defaults configurations for vdsm. but vdsm.conf should persist >> >>> with >> >>> user's >> >>> modification. from my check, regular yum upgrade doesn't touch >> >>> vdsm.conf >> >>> >> >>> Douglas can you verify that with node upgrade? might be specific >> >>> to >> >>> that >> >>> flow.. >> >>> >> >>> Trey, can file a bugzilla on that and describe your steps there? >> >>> >> >>> Thanks >> >>> >> >>> Yaniv Bronhaim, >> >>> >> >>>> >> >>>> Thanks, >> >>>> - Trey >> >>>> _______________________________________________ >> >>>> Users mailing list >> >>>> Users@ovirt.org >> >>>> http://lists.ovirt.org/mailman/listinfo/users >> >>>> >> >>>> >> >>> >> >>> -- >> >>> Yaniv Bronhaim. >> >>> >> >> >> > >>

----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 7:15:56 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
I likely won't automate this yet, as a lot of what's coming in 3.5 seems to obsolete many things I was doing previously via Puppet, in particular the Foreman integration and the ability to add custom iptables rules via engine-config. Previous posts on the list made it seem like modifying "IPTables" could potentially make upgrades less reliable.
Created a gist of a working series of commands based on Alon's example using the Host Deploy Protocol [1].
https://gist.github.com/treydock/570a776b5c160bca7c9c
Curious, where is the public key used by ovirt-engine stored? The one that is available using command=get-ssh-trust. Is there a way to query it from the engine? I'm wondering whether it would be possible to create a custom Facter fact that stores the value of that public key, so it is easier to re-use and access for deployment.
/etc/pki/ovirt-engine/certs/engine.cer
Thanks, - Trey
[1] - http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 11:32 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 11:27:45 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Thanks for clarifying, makes sense now.
The public key trust needed for registration, is that the same key that would be used when adding host via UI?
yes.
you can download it via: $ curl 'http://engine/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY' $ curl 'http://engine/ovirt-engine/services/host-register?version=1&command=get-ssh-trust'
probably better to use https and verify CA certificate fingerprint if you do that from host.
Any examples of how to use the HostDeployProtocol [1]? I like the idea of using registration but haven't the slightest idea how to implement what's described in the docs [1]. I do recall seeing an article posted (searching email and can't find) that had a nice walk-through of how to use the oVirt API using browser tools. I'm unsure if this HostDeployProtocol would be done that way or via some other method.
there are two apis, the formal rest-api that is exposed by the engine and can be accessed using any rest api tool or ovirt-engine-cli, ovirt-engine-sdk-java, ovirt-engine-sdk-python wrappers. I sent you a minimal example in previous message.
and the host-deploy protocol[1], which should have been exposed in the rest-api, but for some reason I cannot understand it was not included in the public interface of the engine.
the advantage of using the rest-api is that you can achieve full cycle using the protocol, the add host cycle is what you seek.
the host-deploy protocol just register the host, but the sysadmin needs to approve the host via the ui (or via the rest api) before it is usable.
Thanks, - Trey
[1] http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 3:01 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 10:45:12 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Excellent, so installing 'ovirt-host-deploy' on each node then configuring the /etc/ovirt-host-deploy.conf.d files seems very automate-able, will see how it works in practice.
you do not need to install the ovirt-host-deploy, just create the files.
Regarding the actual host registration and getting the host added to ovirt-engine, are there other methods besides the API and the sdk? Would it be possible to configure the necessary ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? I notice that running 'ovirt-host-deploy' wants to make whatever host executes it a ovir hypervisor but haven't yet run it all the way through as no server to test with at this time. There seems to be no "--help" or similar command line argument.
you should not run host-deploy directly, but via the engine's process, either registration or add host as I replied previously.
when base system is ready, you issue add host via api of engine or via ui, the other alternative is to register the host host the host-deploy protocol, and approve the host via api of engine or via ui.
I'm sure this will all be more clear once I attempt the steps and run through the motions. Will try to find a system to test on so I'm ready once our new servers arrive.
Thanks, - Trey
On Tue, Aug 5, 2014 at 2:23 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message ----- > From: "Trey Dockendorf" <treydock@gmail.com> > To: "Alon Bar-Lev" <alonbl@redhat.com> > Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, > "Fabian > Deutsch" <fabiand@redhat.com>, "Dan > Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, > "Douglas Landgraf" <dougsland@redhat.com> > Sent: Tuesday, August 5, 2014 10:01:14 PM > Subject: Re: [ovirt-users] Proper way to change and persist vdsm > configuration options > > Ah, thank you for the input! Just so I'm not spending time > implementing the wrong changes, let me confirm I understand your > comments. > > 1) Deploy host with Foreman > 2) Apply Puppet catalog including ovirt Puppet module > 3) Initiate host-deploy via rest API > > In the ovirt module the following takes place: > > 2a) Add yum repos > 2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf >
you can have any # of files with any prefix :))
> For #2b I have a few questions > > * The name of the ".conf" file is simply for sorting and > labeling/organization, it has not functional impact on what those > overrides apply to?
right.
> * That file is managed on the ovirt-engine server, not the actual > nodes?
currently on the host, in future we will provide a method to add this to engine database[1]
[1] http://gerrit.ovirt.org/#/c/27064/
> * Is there any way to apply overrides to specific hosts? For > example > if I have some hosts that require a config and others that don't, > how > would I separate those *.conf files? This is more theoretical as > right now my setup is common across all nodes.
the poppet module can put whatever required on each host.
> For #3...the implementation of API calls from within Puppet is a > challenge and one I can't tackle yet, but definitely will make it a > goal for the future. In the mean time, what's the "manual" way to > initiate host-deploy? Is there a CLI command that would have the > same > result as an API call or is the recommended way to perform the API > call manually (ie curl)?
well, you can register host using the following protocol[1], but it is difficult to do this securely, what you actually need is to establish ssh trust for root with engine key then register.
you can also use the register command using curl by something like (I have not checked): https://admin%40internal:password@engine/ovirt-engine/api/hosts --- <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <host> <name>host1</name> <address>dns</address> <ssh> <authentication_method>publickey</authentication_method> </ssh> <cluster id="cluster-uuid"/> </host> ---
you can also use the ovirt-engine-sdk-python package: --- import ovirtsdk.api import ovirtsdk.xml
sdk = ovirtsdk.api.API( url='https://host/ovirt-engine/api', username='admin@internal', password='password', insecure=True, ) sdk.hosts.add( ovirtsdk.xml.params.Host( name='host1', address='host1', cluster=engine_api.clusters.get( 'cluster' ), ssh=self._ovirtsdk_xml.params.SSH( authentication_method='publickey', ), ) ) ---
[1] http://www.ovirt.org/Features/HostDeployProtocol
> > Thanks! > - Trey > > On Tue, Aug 5, 2014 at 1:45 PM, Alon Bar-Lev <alonbl@redhat.com> > wrote: > > > > > > ----- Original Message ----- > >> From: "Trey Dockendorf" <treydock@gmail.com> > >> To: "ybronhei" <ybronhei@redhat.com> > >> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" > >> <fabiand@redhat.com>, > >> "Dan > >> Kenigsberg" <danken@redhat.com>, "Itamar > >> Heim" <iheim@redhat.com>, "Douglas Landgraf" > >> <dougsland@redhat.com>, > >> "Alon > >> Bar-Lev" <alonbl@redhat.com> > >> Sent: Tuesday, August 5, 2014 9:36:24 PM > >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm > >> configuration options > >> > >> On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei@redhat.com> > >> wrote: > >> > Hey, > >> > > >> > Just noticed something that I forgot about.. > >> > before filing new BZ, see in ovirt-host-deploy > >> > README.environment > >> > [1] > >> > the > >> > section: > >> > VDSM/configOverride(bool) [True] > >> > Override vdsm configuration file. > >> > > >> > changing it to false will keep your vdsm.conf file as is after > >> > deploying > >> > the > >> > host again (what happens after node upgrade) > >> > > >> > [1] > >> > https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment > >> > > >> > please check if that what you meant.. > >> > > >> > Thanks, > >> > Yaniv Bronhaim. > >> > > >> > >> I was unaware of that package. I will check that out as that > >> seems > >> to > >> be what I am looking for. > >> > >> I have not filed this in BZ and will hold off pending > >> ovirt-host-deploy. If you feel a BZ is still necessary then > >> please > >> do > >> file one and I would be happy to provide input if it would help. > >> > >> Right now this is my workflow. > >> > >> 1. Foreman provisions bare-metal server with CentOS 6.5 > >> 2. Once provisioned and system rebooted Puppet applies > >> puppet-ovirt > >> [1] module that adds the necessary yum repos > > > > and should stop here.. > > > >> , and installs packages. > >> Part of my Puppet deployment is basic things like sudo management > >> (vdsm's sudo is account for), sssd configuration, and other > >> aspects > >> that are needed by every system in my infrastructure. Part of > >> the > >> ovirt::node Puppet class is managing vdsm.conf, and in my case > >> that > >> means ensuring iSER is enabled for iSCSI over IB. > > > > you can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf > > --- > > VDSM_CONFIG/section/key=str:content > > --- > > > > this will create a proper vdsm.conf when host-deploy is initiated. > > > > you should now use the rest api to initiate host-deploy. > > > >> 3. Once host is online and has had the full Puppet catalog > >> applied I > >> log into ovirt-engine web interface and add those host (pulling > >> it's > >> data via the Foreman provider). > > > > right, but you should let this process install packages and manage > > configuration. > > > >> What I've noticed is that after step #3, after a host is added by > >> ovirt-engine, the vdsm.conf file is reset to default and I have > >> to > >> reapply Puppet before it can be used as the one of my Data > >> Storage > >> Domains requires iSER (not available over TCP). > > > > right, see above. > > > >> What would be the workflow using ovirt-host-deploy? 
Thus far > >> I've > >> had > >> to piece together my workflow based on the documentation and > >> filling > >> in blanks where possible since I do require customizations to > >> vdsm.conf and the documented workflow of adding a host via web UI > >> does > >> not allow for such customization. > >> > >> Thanks, > >> - Trey > >> > >> [1] - https://github.com/treydock/puppet-ovirt (README not fully > >> updated as still working out how to use Puppet with oVirt) > >> > >> > > >> > On 08/05/2014 08:12 AM, Trey Dockendorf wrote: > >> >> > >> >> I'll file BZ. As far as I can recall this has been an issue > >> >> since > >> >> 3.3.x > >> >> as > >> >> I have been using Puppet to modify values and have had to > >> >> rerun > >> >> Puppet > >> >> after installing a node via GUI and when performing update > >> >> from > >> >> GUI. > >> >> Given > >> >> that it has occurred when VDSM version didn't change on the > >> >> node > >> >> it > >> >> seems > >> >> likely to be something being done by Python code that > >> >> bootstraps > >> >> a > >> >> node > >> >> and > >> >> performs the other tasks. I won't have any systems available > >> >> to > >> >> test > >> >> with > >> >> for a few days. New hardware specifically for our oVirt > >> >> deployment > >> >> is > >> >> on > >> >> order so should be able to more thoroughly debug and capture > >> >> logs > >> >> at > >> >> that > >> >> time. > >> >> > >> >> Would using vdsm-reg be a better solution for adding new > >> >> nodes? > >> >> I > >> >> only > >> >> tried using vdsm-reg once and it went very poorly...lots of > >> >> missing > >> >> dependencies not pulled in from yum install I had to install > >> >> manually > >> >> via > >> >> yum. Then the node was auto added to newest cluster with no > >> >> ability > >> >> to > >> >> change the cluster. Be happy to debug that too if there's > >> >> some > >> >> docs > >> >> that > >> >> outline the expected behavior. > >> >> > >> >> Using vdsm-reg or something similar seems like a better fit > >> >> for > >> >> puppet > >> >> deployed nodes, as opposed to requiring GUI steps to add the > >> >> node. > >> >> > >> >> Thanks > >> >> - Trey > >> >> On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei@redhat.com> > >> >> wrote: > >> >> > >> >>> On 07/31/2014 01:28 AM, Trey Dockendorf wrote: > >> >>> > >> >>>> I'm running ovirt nodes that are stock CentOS 6.5 systems > >> >>>> with > >> >>>> VDSM > >> >>>> installed. I am using iSER to do iSCSI over RDMA and to > >> >>>> make > >> >>>> that > >> >>>> work I have to modify /etc/vdsm/vdsm.conf to include the > >> >>>> following: > >> >>>> > >> >>>> [irs] > >> >>>> iscsi_default_ifaces = iser,default > >> >>>> > >> >>>> I've noticed that any time I upgrade a node from the engine > >> >>>> web > >> >>>> interface that changes to vdsm.conf are wiped out. I don't > >> >>>> know > >> >>>> if > >> >>>> this is being done by the configuration code or by the vdsm > >> >>>> package. > >> >>>> Is there a more reliable way to ensure changes to vdsm.conf > >> >>>> are > >> >>>> NOT > >> >>>> removed automatically? > >> >>>> > >> >>> > >> >>> Hey, > >> >>> > >> >>> vdsm.conf shouldn't wiped out and shouldn't changed at all > >> >>> during > >> >>> upgrade. > >> >>> other related conf files (such as libvirtd.conf) might be > >> >>> overrided > >> >>> to > >> >>> keep > >> >>> defaults configurations for vdsm. but vdsm.conf should > >> >>> persist > >> >>> with > >> >>> user's > >> >>> modification. 
from my check, regular yum upgrade doesn't > >> >>> touch > >> >>> vdsm.conf > >> >>> > >> >>> Douglas can you verify that with node upgrade? might be > >> >>> specific > >> >>> to > >> >>> that > >> >>> flow.. > >> >>> > >> >>> Trey, can file a bugzilla on that and describe your steps > >> >>> there? > >> >>> > >> >>> Thanks > >> >>> > >> >>> Yaniv Bronhaim, > >> >>> > >> >>>> > >> >>>> Thanks, > >> >>>> - Trey > >> >>>> _______________________________________________ > >> >>>> Users mailing list > >> >>>> Users@ovirt.org > >> >>>> http://lists.ovirt.org/mailman/listinfo/users > >> >>>> > >> >>>> > >> >>> > >> >>> -- > >> >>> Yaniv Bronhaim. > >> >>> > >> >> > >> > > >> >

Sorry, I meant the SSH public key. Is that a file or in the database? I did a "grep" for the public key downloaded via the command=get-ssh-trust and found no files in /etc/ or /var/lib/ovirt-engine that matched.

- Trey

On Thu, Aug 21, 2014 at 11:33 AM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 7:15:56 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
I likely won't automate this yet, as a lot of what's coming in 3.5 seems to obsolete many things I was doing previously via Puppet. In particular the Foreman integration and the ability to add custom iptables rules to engine-config. Previous posts on the list made is seem like modifying "IPTables" could potentially make upgrades less reliable.
Created a gist of a working series of commands based on Alon's example using the Host Deploy Protocol [1].
https://gist.github.com/treydock/570a776b5c160bca7c9c
Curious , where is the public key used by the ovirt-engine stored? The one that is available using command=get-ssh-trust. Is there a way to query it from the engine? I'm thinking if it would be possible to create a custom Facter face that stores the value of that public key so easier to re-use and access for deployment.
/etc/pki/ovirt-engine/certs/engine.cer
Thanks, - Trey
[1] - http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 11:32 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 11:27:45 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Thanks for clarifying, makes sense now.
The public key trust needed for registration, is that the same key that would be used when adding host via UI?
yes.
you can download it via: $ curl 'http://engine/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY' $ curl 'http://engine/ovirt-engine/services/host-register?version=1&command=get-ssh-trust'
probably better to use https and verify CA certificate fingerprint if you do that from host.
Any examples of how to use the HostDeployProtocol [1]? I like the idea of using registration but haven't the slightest idea how to implement what's described in the docs [1]. I do recall seeing an article posted (searching email and can't find) that had a nice walk-through of how to use the oVirt API using browser tools. I'm unsure if this HostDeployProtocol would be done that way or via some other method.
there are two apis, the formal rest-api that is exposed by the engine and can be accessed using any rest api tool or ovirt-engine-cli, ovirt-engine-sdk-java, ovirt-engine-sdk-python wrappers. I sent you a minimal example in previous message.
and the host-deploy protocol[1], which should have been exposed in the rest-api, but for some reason I cannot understand it was not included in the public interface of the engine.
the advantage of using the rest-api is that you can achieve full cycle using the protocol, the add host cycle is what you seek.
the host-deploy protocol just register the host, but the sysadmin needs to approve the host via the ui (or via the rest api) before it is usable.
Thanks, - Trey
[1] http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 3:01 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 10:45:12 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Excellent, so installing 'ovirt-host-deploy' on each node then configuring the /etc/ovirt-host-deploy.conf.d files seems very automate-able, will see how it works in practice.
you do not need to install the ovirt-host-deploy, just create the files.
Regarding the actual host registration and getting the host added to ovirt-engine, are there other methods besides the API and the sdk? Would it be possible to configure the necessary ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? I notice that running 'ovirt-host-deploy' wants to make whatever host executes it a ovir hypervisor but haven't yet run it all the way through as no server to test with at this time. There seems to be no "--help" or similar command line argument.
you should not run host-deploy directly, but via the engine's process, either registration or add host as I replied previously.
when base system is ready, you issue add host via api of engine or via ui, the other alternative is to register the host host the host-deploy protocol, and approve the host via api of engine or via ui.
I'm sure this will all be more clear once I attempt the steps and run through the motions. Will try to find a system to test on so I'm ready once our new servers arrive.
Thanks, - Trey
On Tue, Aug 5, 2014 at 2:23 PM, Alon Bar-Lev <alonbl@redhat.com> wrote: > > > ----- Original Message ----- >> From: "Trey Dockendorf" <treydock@gmail.com> >> To: "Alon Bar-Lev" <alonbl@redhat.com> >> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, >> "Fabian >> Deutsch" <fabiand@redhat.com>, "Dan >> Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, >> "Douglas Landgraf" <dougsland@redhat.com> >> Sent: Tuesday, August 5, 2014 10:01:14 PM >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm >> configuration options >> >> Ah, thank you for the input! Just so I'm not spending time >> implementing the wrong changes, let me confirm I understand your >> comments. >> >> 1) Deploy host with Foreman >> 2) Apply Puppet catalog including ovirt Puppet module >> 3) Initiate host-deploy via rest API >> >> In the ovirt module the following takes place: >> >> 2a) Add yum repos >> 2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf >> > > you can have any # of files with any prefix :)) > >> For #2b I have a few questions >> >> * The name of the ".conf" file is simply for sorting and >> labeling/organization, it has not functional impact on what those >> overrides apply to? > > right. > >> * That file is managed on the ovirt-engine server, not the actual >> nodes? > > currently on the host, in future we will provide a method to add this > to > engine database[1] > > [1] http://gerrit.ovirt.org/#/c/27064/ > >> * Is there any way to apply overrides to specific hosts? For >> example >> if I have some hosts that require a config and others that don't, >> how >> would I separate those *.conf files? This is more theoretical as >> right now my setup is common across all nodes. > > the poppet module can put whatever required on each host. > >> For #3...the implementation of API calls from within Puppet is a >> challenge and one I can't tackle yet, but definitely will make it a >> goal for the future. In the mean time, what's the "manual" way to >> initiate host-deploy? Is there a CLI command that would have the >> same >> result as an API call or is the recommended way to perform the API >> call manually (ie curl)? > > well, you can register host using the following protocol[1], but it > is > difficult to do this securely, what you actually need is to establish > ssh > trust for root with engine key then register. > > you can also use the register command using curl by something like (I > have > not checked): > https://admin%40internal:password@engine/ovirt-engine/api/hosts > --- > <?xml version="1.0" encoding="UTF-8" standalone="yes"?> > <host> > <name>host1</name> > <address>dns</address> > <ssh> > <authentication_method>publickey</authentication_method> > </ssh> > <cluster id="cluster-uuid"/> > </host> > --- > > you can also use the ovirt-engine-sdk-python package: > --- > import ovirtsdk.api > import ovirtsdk.xml > > sdk = ovirtsdk.api.API( > url='https://host/ovirt-engine/api', > username='admin@internal', > password='password', > insecure=True, > ) > sdk.hosts.add( > ovirtsdk.xml.params.Host( > name='host1', > address='host1', > cluster=engine_api.clusters.get( > 'cluster' > ), > ssh=self._ovirtsdk_xml.params.SSH( > authentication_method='publickey', > ), > ) > ) > --- > > [1] http://www.ovirt.org/Features/HostDeployProtocol > >> >> Thanks! 
>> - Trey >> >> On Tue, Aug 5, 2014 at 1:45 PM, Alon Bar-Lev <alonbl@redhat.com> >> wrote: >> > >> > >> > ----- Original Message ----- >> >> From: "Trey Dockendorf" <treydock@gmail.com> >> >> To: "ybronhei" <ybronhei@redhat.com> >> >> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" >> >> <fabiand@redhat.com>, >> >> "Dan >> >> Kenigsberg" <danken@redhat.com>, "Itamar >> >> Heim" <iheim@redhat.com>, "Douglas Landgraf" >> >> <dougsland@redhat.com>, >> >> "Alon >> >> Bar-Lev" <alonbl@redhat.com> >> >> Sent: Tuesday, August 5, 2014 9:36:24 PM >> >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm >> >> configuration options >> >> >> >> On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei@redhat.com> >> >> wrote: >> >> > Hey, >> >> > >> >> > Just noticed something that I forgot about.. >> >> > before filing new BZ, see in ovirt-host-deploy >> >> > README.environment >> >> > [1] >> >> > the >> >> > section: >> >> > VDSM/configOverride(bool) [True] >> >> > Override vdsm configuration file. >> >> > >> >> > changing it to false will keep your vdsm.conf file as is after >> >> > deploying >> >> > the >> >> > host again (what happens after node upgrade) >> >> > >> >> > [1] >> >> > https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment >> >> > >> >> > please check if that what you meant.. >> >> > >> >> > Thanks, >> >> > Yaniv Bronhaim. >> >> > >> >> >> >> I was unaware of that package. I will check that out as that >> >> seems >> >> to >> >> be what I am looking for. >> >> >> >> I have not filed this in BZ and will hold off pending >> >> ovirt-host-deploy. If you feel a BZ is still necessary then >> >> please >> >> do >> >> file one and I would be happy to provide input if it would help. >> >> >> >> Right now this is my workflow. >> >> >> >> 1. Foreman provisions bare-metal server with CentOS 6.5 >> >> 2. Once provisioned and system rebooted Puppet applies >> >> puppet-ovirt >> >> [1] module that adds the necessary yum repos >> > >> > and should stop here.. >> > >> >> , and installs packages. >> >> Part of my Puppet deployment is basic things like sudo management >> >> (vdsm's sudo is account for), sssd configuration, and other >> >> aspects >> >> that are needed by every system in my infrastructure. Part of >> >> the >> >> ovirt::node Puppet class is managing vdsm.conf, and in my case >> >> that >> >> means ensuring iSER is enabled for iSCSI over IB. >> > >> > you can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf >> > --- >> > VDSM_CONFIG/section/key=str:content >> > --- >> > >> > this will create a proper vdsm.conf when host-deploy is initiated. >> > >> > you should now use the rest api to initiate host-deploy. >> > >> >> 3. Once host is online and has had the full Puppet catalog >> >> applied I >> >> log into ovirt-engine web interface and add those host (pulling >> >> it's >> >> data via the Foreman provider). >> > >> > right, but you should let this process install packages and manage >> > configuration. >> > >> >> What I've noticed is that after step #3, after a host is added by >> >> ovirt-engine, the vdsm.conf file is reset to default and I have >> >> to >> >> reapply Puppet before it can be used as the one of my Data >> >> Storage >> >> Domains requires iSER (not available over TCP). >> > >> > right, see above. >> > >> >> What would be the workflow using ovirt-host-deploy? 
Thus far >> >> I've >> >> had >> >> to piece together my workflow based on the documentation and >> >> filling >> >> in blanks where possible since I do require customizations to >> >> vdsm.conf and the documented workflow of adding a host via web UI >> >> does >> >> not allow for such customization. >> >> >> >> Thanks, >> >> - Trey >> >> >> >> [1] - https://github.com/treydock/puppet-ovirt (README not fully >> >> updated as still working out how to use Puppet with oVirt) >> >> >> >> > >> >> > On 08/05/2014 08:12 AM, Trey Dockendorf wrote: >> >> >> >> >> >> I'll file BZ. As far as I can recall this has been an issue >> >> >> since >> >> >> 3.3.x >> >> >> as >> >> >> I have been using Puppet to modify values and have had to >> >> >> rerun >> >> >> Puppet >> >> >> after installing a node via GUI and when performing update >> >> >> from >> >> >> GUI. >> >> >> Given >> >> >> that it has occurred when VDSM version didn't change on the >> >> >> node >> >> >> it >> >> >> seems >> >> >> likely to be something being done by Python code that >> >> >> bootstraps >> >> >> a >> >> >> node >> >> >> and >> >> >> performs the other tasks. I won't have any systems available >> >> >> to >> >> >> test >> >> >> with >> >> >> for a few days. New hardware specifically for our oVirt >> >> >> deployment >> >> >> is >> >> >> on >> >> >> order so should be able to more thoroughly debug and capture >> >> >> logs >> >> >> at >> >> >> that >> >> >> time. >> >> >> >> >> >> Would using vdsm-reg be a better solution for adding new >> >> >> nodes? >> >> >> I >> >> >> only >> >> >> tried using vdsm-reg once and it went very poorly...lots of >> >> >> missing >> >> >> dependencies not pulled in from yum install I had to install >> >> >> manually >> >> >> via >> >> >> yum. Then the node was auto added to newest cluster with no >> >> >> ability >> >> >> to >> >> >> change the cluster. Be happy to debug that too if there's >> >> >> some >> >> >> docs >> >> >> that >> >> >> outline the expected behavior. >> >> >> >> >> >> Using vdsm-reg or something similar seems like a better fit >> >> >> for >> >> >> puppet >> >> >> deployed nodes, as opposed to requiring GUI steps to add the >> >> >> node. >> >> >> >> >> >> Thanks >> >> >> - Trey >> >> >> On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei@redhat.com> >> >> >> wrote: >> >> >> >> >> >>> On 07/31/2014 01:28 AM, Trey Dockendorf wrote: >> >> >>> >> >> >>>> I'm running ovirt nodes that are stock CentOS 6.5 systems >> >> >>>> with >> >> >>>> VDSM >> >> >>>> installed. I am using iSER to do iSCSI over RDMA and to >> >> >>>> make >> >> >>>> that >> >> >>>> work I have to modify /etc/vdsm/vdsm.conf to include the >> >> >>>> following: >> >> >>>> >> >> >>>> [irs] >> >> >>>> iscsi_default_ifaces = iser,default >> >> >>>> >> >> >>>> I've noticed that any time I upgrade a node from the engine >> >> >>>> web >> >> >>>> interface that changes to vdsm.conf are wiped out. I don't >> >> >>>> know >> >> >>>> if >> >> >>>> this is being done by the configuration code or by the vdsm >> >> >>>> package. >> >> >>>> Is there a more reliable way to ensure changes to vdsm.conf >> >> >>>> are >> >> >>>> NOT >> >> >>>> removed automatically? >> >> >>>> >> >> >>> >> >> >>> Hey, >> >> >>> >> >> >>> vdsm.conf shouldn't wiped out and shouldn't changed at all >> >> >>> during >> >> >>> upgrade. >> >> >>> other related conf files (such as libvirtd.conf) might be >> >> >>> overrided >> >> >>> to >> >> >>> keep >> >> >>> defaults configurations for vdsm. 
but vdsm.conf should >> >> >>> persist >> >> >>> with >> >> >>> user's >> >> >>> modification. from my check, regular yum upgrade doesn't >> >> >>> touch >> >> >>> vdsm.conf >> >> >>> >> >> >>> Douglas can you verify that with node upgrade? might be >> >> >>> specific >> >> >>> to >> >> >>> that >> >> >>> flow.. >> >> >>> >> >> >>> Trey, can file a bugzilla on that and describe your steps >> >> >>> there? >> >> >>> >> >> >>> Thanks >> >> >>> >> >> >>> Yaniv Bronhaim, >> >> >>> >> >> >>>> >> >> >>>> Thanks, >> >> >>>> - Trey >> >> >>>> _______________________________________________ >> >> >>>> Users mailing list >> >> >>>> Users@ovirt.org >> >> >>>> http://lists.ovirt.org/mailman/listinfo/users >> >> >>>> >> >> >>>> >> >> >>> >> >> >>> -- >> >> >>> Yaniv Bronhaim. >> >> >>> >> >> >> >> >> > >> >> >>

----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 9:41:03 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Sorry, I meant the SSH public key. Is that a file or in the database? I did a "grep" for the public key downloaded via the command=get-ssh-trust and found no files in /etc/ or /var/lib/ovirt-engine that matched.
openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -pubkey | ssh-keygen -i -m PKCS8 -f /dev/stdin
- Trey
On Thu, Aug 21, 2014 at 11:33 AM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 7:15:56 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
I likely won't automate this yet, as a lot of what's coming in 3.5 seems to obsolete many things I was doing previously via Puppet. In particular the Foreman integration and the ability to add custom iptables rules to engine-config. Previous posts on the list made is seem like modifying "IPTables" could potentially make upgrades less reliable.
Created a gist of a working series of commands based on Alon's example using the Host Deploy Protocol [1].
https://gist.github.com/treydock/570a776b5c160bca7c9c
Curious , where is the public key used by the ovirt-engine stored? The one that is available using command=get-ssh-trust. Is there a way to query it from the engine? I'm thinking if it would be possible to create a custom Facter face that stores the value of that public key so easier to re-use and access for deployment.
/etc/pki/ovirt-engine/certs/engine.cer
Thanks, - Trey
[1] - http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 11:32 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 11:27:45 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Thanks for clarifying, makes sense now.
The public key trust needed for registration, is that the same key that would be used when adding host via UI?
yes.
you can download it via: $ curl 'http://engine/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY' $ curl 'http://engine/ovirt-engine/services/host-register?version=1&command=get-ssh-trust'
probably better to use https and verify CA certificate fingerprint if you do that from host.
Any examples of how to use the HostDeployProtocol [1]? I like the idea of using registration but haven't the slightest idea how to implement what's described in the docs [1]. I do recall seeing an article posted (searching email and can't find) that had a nice walk-through of how to use the oVirt API using browser tools. I'm unsure if this HostDeployProtocol would be done that way or via some other method.
there are two apis, the formal rest-api that is exposed by the engine and can be accessed using any rest api tool or ovirt-engine-cli, ovirt-engine-sdk-java, ovirt-engine-sdk-python wrappers. I sent you a minimal example in previous message.
and the host-deploy protocol[1], which should have been exposed in the rest-api, but for some reason I cannot understand it was not included in the public interface of the engine.
the advantage of using the rest-api is that you can achieve full cycle using the protocol, the add host cycle is what you seek.
the host-deploy protocol just register the host, but the sysadmin needs to approve the host via the ui (or via the rest api) before it is usable.
Thanks, - Trey
[1] http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 3:01 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message ----- > From: "Trey Dockendorf" <treydock@gmail.com> > To: "Alon Bar-Lev" <alonbl@redhat.com> > Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, > "Fabian > Deutsch" <fabiand@redhat.com>, "Dan > Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, > "Douglas Landgraf" <dougsland@redhat.com>, "Oved > Ourfali" <ovedo@redhat.com> > Sent: Tuesday, August 5, 2014 10:45:12 PM > Subject: Re: [ovirt-users] Proper way to change and persist vdsm > configuration options > > Excellent, so installing 'ovirt-host-deploy' on each node then > configuring the /etc/ovirt-host-deploy.conf.d files seems very > automate-able, will see how it works in practice.
you do not need to install the ovirt-host-deploy, just create the files.
> Regarding the actual host registration and getting the host added to > ovirt-engine, are there other methods besides the API and the sdk? > Would it be possible to configure the necessary > ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? I > notice that running 'ovirt-host-deploy' wants to make whatever host > executes it a ovir hypervisor but haven't yet run it all the way > through as no server to test with at this time. There seems to be > no > "--help" or similar command line argument.
you should not run host-deploy directly, but via the engine's process, either registration or add host as I replied previously.
when base system is ready, you issue add host via api of engine or via ui, the other alternative is to register the host host the host-deploy protocol, and approve the host via api of engine or via ui.
> I'm sure this will all be more clear once I attempt the steps and > run > through the motions. Will try to find a system to test on so I'm > ready once our new servers arrive. > > Thanks, > - Trey > > On Tue, Aug 5, 2014 at 2:23 PM, Alon Bar-Lev <alonbl@redhat.com> > wrote: > > > > > > ----- Original Message ----- > >> From: "Trey Dockendorf" <treydock@gmail.com> > >> To: "Alon Bar-Lev" <alonbl@redhat.com> > >> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, > >> "Fabian > >> Deutsch" <fabiand@redhat.com>, "Dan > >> Kenigsberg" <danken@redhat.com>, "Itamar Heim" > >> <iheim@redhat.com>, > >> "Douglas Landgraf" <dougsland@redhat.com> > >> Sent: Tuesday, August 5, 2014 10:01:14 PM > >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm > >> configuration options > >> > >> Ah, thank you for the input! Just so I'm not spending time > >> implementing the wrong changes, let me confirm I understand your > >> comments. > >> > >> 1) Deploy host with Foreman > >> 2) Apply Puppet catalog including ovirt Puppet module > >> 3) Initiate host-deploy via rest API > >> > >> In the ovirt module the following takes place: > >> > >> 2a) Add yum repos > >> 2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf > >> > > > > you can have any # of files with any prefix :)) > > > >> For #2b I have a few questions > >> > >> * The name of the ".conf" file is simply for sorting and > >> labeling/organization, it has not functional impact on what those > >> overrides apply to? > > > > right. > > > >> * That file is managed on the ovirt-engine server, not the actual > >> nodes? > > > > currently on the host, in future we will provide a method to add > > this > > to > > engine database[1] > > > > [1] http://gerrit.ovirt.org/#/c/27064/ > > > >> * Is there any way to apply overrides to specific hosts? For > >> example > >> if I have some hosts that require a config and others that don't, > >> how > >> would I separate those *.conf files? This is more theoretical as > >> right now my setup is common across all nodes. > > > > the poppet module can put whatever required on each host. > > > >> For #3...the implementation of API calls from within Puppet is a > >> challenge and one I can't tackle yet, but definitely will make it > >> a > >> goal for the future. In the mean time, what's the "manual" way > >> to > >> initiate host-deploy? Is there a CLI command that would have the > >> same > >> result as an API call or is the recommended way to perform the > >> API > >> call manually (ie curl)? > > > > well, you can register host using the following protocol[1], but > > it > > is > > difficult to do this securely, what you actually need is to > > establish > > ssh > > trust for root with engine key then register. 
> > > > you can also use the register command using curl by something like > > (I > > have > > not checked): > > https://admin%40internal:password@engine/ovirt-engine/api/hosts > > --- > > <?xml version="1.0" encoding="UTF-8" standalone="yes"?> > > <host> > > <name>host1</name> > > <address>dns</address> > > <ssh> > > <authentication_method>publickey</authentication_method> > > </ssh> > > <cluster id="cluster-uuid"/> > > </host> > > --- > > > > you can also use the ovirt-engine-sdk-python package: > > --- > > import ovirtsdk.api > > import ovirtsdk.xml > > > > sdk = ovirtsdk.api.API( > > url='https://host/ovirt-engine/api', > > username='admin@internal', > > password='password', > > insecure=True, > > ) > > sdk.hosts.add( > > ovirtsdk.xml.params.Host( > > name='host1', > > address='host1', > > cluster=engine_api.clusters.get( > > 'cluster' > > ), > > ssh=self._ovirtsdk_xml.params.SSH( > > authentication_method='publickey', > > ), > > ) > > ) > > --- > > > > [1] http://www.ovirt.org/Features/HostDeployProtocol > > > >> > >> Thanks! > >> - Trey > >> > >> On Tue, Aug 5, 2014 at 1:45 PM, Alon Bar-Lev <alonbl@redhat.com> > >> wrote: > >> > > >> > > >> > ----- Original Message ----- > >> >> From: "Trey Dockendorf" <treydock@gmail.com> > >> >> To: "ybronhei" <ybronhei@redhat.com> > >> >> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" > >> >> <fabiand@redhat.com>, > >> >> "Dan > >> >> Kenigsberg" <danken@redhat.com>, "Itamar > >> >> Heim" <iheim@redhat.com>, "Douglas Landgraf" > >> >> <dougsland@redhat.com>, > >> >> "Alon > >> >> Bar-Lev" <alonbl@redhat.com> > >> >> Sent: Tuesday, August 5, 2014 9:36:24 PM > >> >> Subject: Re: [ovirt-users] Proper way to change and persist > >> >> vdsm > >> >> configuration options > >> >> > >> >> On Tue, Aug 5, 2014 at 12:32 PM, ybronhei > >> >> <ybronhei@redhat.com> > >> >> wrote: > >> >> > Hey, > >> >> > > >> >> > Just noticed something that I forgot about.. > >> >> > before filing new BZ, see in ovirt-host-deploy > >> >> > README.environment > >> >> > [1] > >> >> > the > >> >> > section: > >> >> > VDSM/configOverride(bool) [True] > >> >> > Override vdsm configuration file. > >> >> > > >> >> > changing it to false will keep your vdsm.conf file as is > >> >> > after > >> >> > deploying > >> >> > the > >> >> > host again (what happens after node upgrade) > >> >> > > >> >> > [1] > >> >> > https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment > >> >> > > >> >> > please check if that what you meant.. > >> >> > > >> >> > Thanks, > >> >> > Yaniv Bronhaim. > >> >> > > >> >> > >> >> I was unaware of that package. I will check that out as that > >> >> seems > >> >> to > >> >> be what I am looking for. > >> >> > >> >> I have not filed this in BZ and will hold off pending > >> >> ovirt-host-deploy. If you feel a BZ is still necessary then > >> >> please > >> >> do > >> >> file one and I would be happy to provide input if it would > >> >> help. > >> >> > >> >> Right now this is my workflow. > >> >> > >> >> 1. Foreman provisions bare-metal server with CentOS 6.5 > >> >> 2. Once provisioned and system rebooted Puppet applies > >> >> puppet-ovirt > >> >> [1] module that adds the necessary yum repos > >> > > >> > and should stop here.. > >> > > >> >> , and installs packages. > >> >> Part of my Puppet deployment is basic things like sudo > >> >> management > >> >> (vdsm's sudo is account for), sssd configuration, and other > >> >> aspects > >> >> that are needed by every system in my infrastructure. 
Part of > >> >> the > >> >> ovirt::node Puppet class is managing vdsm.conf, and in my case > >> >> that > >> >> means ensuring iSER is enabled for iSCSI over IB. > >> > > >> > you can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf > >> > --- > >> > VDSM_CONFIG/section/key=str:content > >> > --- > >> > > >> > this will create a proper vdsm.conf when host-deploy is > >> > initiated. > >> > > >> > you should now use the rest api to initiate host-deploy. > >> > > >> >> 3. Once host is online and has had the full Puppet catalog > >> >> applied I > >> >> log into ovirt-engine web interface and add those host > >> >> (pulling > >> >> it's > >> >> data via the Foreman provider). > >> > > >> > right, but you should let this process install packages and > >> > manage > >> > configuration. > >> > > >> >> What I've noticed is that after step #3, after a host is added > >> >> by > >> >> ovirt-engine, the vdsm.conf file is reset to default and I > >> >> have > >> >> to > >> >> reapply Puppet before it can be used as the one of my Data > >> >> Storage > >> >> Domains requires iSER (not available over TCP). > >> > > >> > right, see above. > >> > > >> >> What would be the workflow using ovirt-host-deploy? Thus far > >> >> I've > >> >> had > >> >> to piece together my workflow based on the documentation and > >> >> filling > >> >> in blanks where possible since I do require customizations to > >> >> vdsm.conf and the documented workflow of adding a host via web > >> >> UI > >> >> does > >> >> not allow for such customization. > >> >> > >> >> Thanks, > >> >> - Trey > >> >> > >> >> [1] - https://github.com/treydock/puppet-ovirt (README not > >> >> fully > >> >> updated as still working out how to use Puppet with oVirt) > >> >> > >> >> > > >> >> > On 08/05/2014 08:12 AM, Trey Dockendorf wrote: > >> >> >> > >> >> >> I'll file BZ. As far as I can recall this has been an > >> >> >> issue > >> >> >> since > >> >> >> 3.3.x > >> >> >> as > >> >> >> I have been using Puppet to modify values and have had to > >> >> >> rerun > >> >> >> Puppet > >> >> >> after installing a node via GUI and when performing update > >> >> >> from > >> >> >> GUI. > >> >> >> Given > >> >> >> that it has occurred when VDSM version didn't change on the > >> >> >> node > >> >> >> it > >> >> >> seems > >> >> >> likely to be something being done by Python code that > >> >> >> bootstraps > >> >> >> a > >> >> >> node > >> >> >> and > >> >> >> performs the other tasks. I won't have any systems > >> >> >> available > >> >> >> to > >> >> >> test > >> >> >> with > >> >> >> for a few days. New hardware specifically for our oVirt > >> >> >> deployment > >> >> >> is > >> >> >> on > >> >> >> order so should be able to more thoroughly debug and > >> >> >> capture > >> >> >> logs > >> >> >> at > >> >> >> that > >> >> >> time. > >> >> >> > >> >> >> Would using vdsm-reg be a better solution for adding new > >> >> >> nodes? > >> >> >> I > >> >> >> only > >> >> >> tried using vdsm-reg once and it went very poorly...lots of > >> >> >> missing > >> >> >> dependencies not pulled in from yum install I had to > >> >> >> install > >> >> >> manually > >> >> >> via > >> >> >> yum. Then the node was auto added to newest cluster with > >> >> >> no > >> >> >> ability > >> >> >> to > >> >> >> change the cluster. Be happy to debug that too if there's > >> >> >> some > >> >> >> docs > >> >> >> that > >> >> >> outline the expected behavior. 
> >> >> >> > >> >> >> Using vdsm-reg or something similar seems like a better fit > >> >> >> for > >> >> >> puppet > >> >> >> deployed nodes, as opposed to requiring GUI steps to add > >> >> >> the > >> >> >> node. > >> >> >> > >> >> >> Thanks > >> >> >> - Trey > >> >> >> On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei@redhat.com> > >> >> >> wrote: > >> >> >> > >> >> >>> On 07/31/2014 01:28 AM, Trey Dockendorf wrote: > >> >> >>> > >> >> >>>> I'm running ovirt nodes that are stock CentOS 6.5 systems > >> >> >>>> with > >> >> >>>> VDSM > >> >> >>>> installed. I am using iSER to do iSCSI over RDMA and to > >> >> >>>> make > >> >> >>>> that > >> >> >>>> work I have to modify /etc/vdsm/vdsm.conf to include the > >> >> >>>> following: > >> >> >>>> > >> >> >>>> [irs] > >> >> >>>> iscsi_default_ifaces = iser,default > >> >> >>>> > >> >> >>>> I've noticed that any time I upgrade a node from the > >> >> >>>> engine > >> >> >>>> web > >> >> >>>> interface that changes to vdsm.conf are wiped out. I > >> >> >>>> don't > >> >> >>>> know > >> >> >>>> if > >> >> >>>> this is being done by the configuration code or by the > >> >> >>>> vdsm > >> >> >>>> package. > >> >> >>>> Is there a more reliable way to ensure changes to > >> >> >>>> vdsm.conf > >> >> >>>> are > >> >> >>>> NOT > >> >> >>>> removed automatically? > >> >> >>>> > >> >> >>> > >> >> >>> Hey, > >> >> >>> > >> >> >>> vdsm.conf shouldn't wiped out and shouldn't changed at all > >> >> >>> during > >> >> >>> upgrade. > >> >> >>> other related conf files (such as libvirtd.conf) might be > >> >> >>> overrided > >> >> >>> to > >> >> >>> keep > >> >> >>> defaults configurations for vdsm. but vdsm.conf should > >> >> >>> persist > >> >> >>> with > >> >> >>> user's > >> >> >>> modification. from my check, regular yum upgrade doesn't > >> >> >>> touch > >> >> >>> vdsm.conf > >> >> >>> > >> >> >>> Douglas can you verify that with node upgrade? might be > >> >> >>> specific > >> >> >>> to > >> >> >>> that > >> >> >>> flow.. > >> >> >>> > >> >> >>> Trey, can file a bugzilla on that and describe your steps > >> >> >>> there? > >> >> >>> > >> >> >>> Thanks > >> >> >>> > >> >> >>> Yaniv Bronhaim, > >> >> >>> > >> >> >>>> > >> >> >>>> Thanks, > >> >> >>>> - Trey > >> >> >>>> _______________________________________________ > >> >> >>>> Users mailing list > >> >> >>>> Users@ovirt.org > >> >> >>>> http://lists.ovirt.org/mailman/listinfo/users > >> >> >>>> > >> >> >>>> > >> >> >>> > >> >> >>> -- > >> >> >>> Yaniv Bronhaim. > >> >> >>> > >> >> >> > >> >> > > >> >> > >> >

Is there a method that works in EL6?

$ openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -pubkey | ssh-keygen -i -m PKCS8 -f /dev/stdin
ssh-keygen: illegal option -- m

$ openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -pubkey | ssh-keygen -i -f /dev/stdin
buffer_get_string_ret: bad string length 813826338
key_from_blob: can't read key type
decode blob failed.

I achieved a somewhat similar result by doing the following, though it is likely a security issue to have something like Facter read from /etc/pki/ovirt-engine/keys:

$ ssh-keygen -y -f /etc/pki/ovirt-engine/keys/engine_id_rsa
ssh-rsa <PUBKEY>

Thanks,
- Trey

On Thu, Aug 21, 2014 at 1:44 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 9:41:03 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Sorry, I meant the SSH public key. Is that a file or in the database? I did a "grep" for the public key downloaded via the command=get-ssh-trust and found no files in /etc/ or /var/lib/ovirt-engine that matched.
openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -pubkey | ssh-keygen -i -m PKCS8 -f /dev/stdin
- Trey
On Thu, Aug 21, 2014 at 11:33 AM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 7:15:56 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
I likely won't automate this yet, as a lot of what's coming in 3.5 seems to obsolete many things I was doing previously via Puppet. In particular the Foreman integration and the ability to add custom iptables rules to engine-config. Previous posts on the list made is seem like modifying "IPTables" could potentially make upgrades less reliable.
Created a gist of a working series of commands based on Alon's example using the Host Deploy Protocol [1].
https://gist.github.com/treydock/570a776b5c160bca7c9c
Curious , where is the public key used by the ovirt-engine stored? The one that is available using command=get-ssh-trust. Is there a way to query it from the engine? I'm thinking if it would be possible to create a custom Facter face that stores the value of that public key so easier to re-use and access for deployment.
/etc/pki/ovirt-engine/certs/engine.cer
Thanks, - Trey
[1] - http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 11:32 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Tuesday, August 5, 2014 11:27:45 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Thanks for clarifying, makes sense now.
The public key trust needed for registration, is that the same key that would be used when adding host via UI?
yes.
you can download it via: $ curl 'http://engine/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY' $ curl 'http://engine/ovirt-engine/services/host-register?version=1&command=get-ssh-trust'
probably better to use https and verify CA certificate fingerprint if you do that from host.
Any examples of how to use the HostDeployProtocol [1]? I like the idea of using registration but haven't the slightest idea how to implement what's described in the docs [1]. I do recall seeing an article posted (searching email and can't find) that had a nice walk-through of how to use the oVirt API using browser tools. I'm unsure if this HostDeployProtocol would be done that way or via some other method.
there are two apis, the formal rest-api that is exposed by the engine and can be accessed using any rest api tool or ovirt-engine-cli, ovirt-engine-sdk-java, ovirt-engine-sdk-python wrappers. I sent you a minimal example in previous message.
and the host-deploy protocol[1], which should have been exposed in the rest-api, but for some reason I cannot understand it was not included in the public interface of the engine.
the advantage of using the rest-api is that you can achieve full cycle using the protocol, the add host cycle is what you seek.
the host-deploy protocol just register the host, but the sysadmin needs to approve the host via the ui (or via the rest api) before it is usable.
Thanks, - Trey
[1] http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 3:01 PM, Alon Bar-Lev <alonbl@redhat.com> wrote: > > > ----- Original Message ----- >> From: "Trey Dockendorf" <treydock@gmail.com> >> To: "Alon Bar-Lev" <alonbl@redhat.com> >> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, >> "Fabian >> Deutsch" <fabiand@redhat.com>, "Dan >> Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, >> "Douglas Landgraf" <dougsland@redhat.com>, "Oved >> Ourfali" <ovedo@redhat.com> >> Sent: Tuesday, August 5, 2014 10:45:12 PM >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm >> configuration options >> >> Excellent, so installing 'ovirt-host-deploy' on each node then >> configuring the /etc/ovirt-host-deploy.conf.d files seems very >> automate-able, will see how it works in practice. > > you do not need to install the ovirt-host-deploy, just create the > files. > >> Regarding the actual host registration and getting the host added to >> ovirt-engine, are there other methods besides the API and the sdk? >> Would it be possible to configure the necessary >> ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? I >> notice that running 'ovirt-host-deploy' wants to make whatever host >> executes it a ovir hypervisor but haven't yet run it all the way >> through as no server to test with at this time. There seems to be >> no >> "--help" or similar command line argument. > > you should not run host-deploy directly, but via the engine's > process, > either registration or add host as I replied previously. > > when base system is ready, you issue add host via api of engine or > via > ui, > the other alternative is to register the host host the host-deploy > protocol, and approve the host via api of engine or via ui. > >> I'm sure this will all be more clear once I attempt the steps and >> run >> through the motions. Will try to find a system to test on so I'm >> ready once our new servers arrive. >> >> Thanks, >> - Trey >> >> On Tue, Aug 5, 2014 at 2:23 PM, Alon Bar-Lev <alonbl@redhat.com> >> wrote: >> > >> > >> > ----- Original Message ----- >> >> From: "Trey Dockendorf" <treydock@gmail.com> >> >> To: "Alon Bar-Lev" <alonbl@redhat.com> >> >> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, >> >> "Fabian >> >> Deutsch" <fabiand@redhat.com>, "Dan >> >> Kenigsberg" <danken@redhat.com>, "Itamar Heim" >> >> <iheim@redhat.com>, >> >> "Douglas Landgraf" <dougsland@redhat.com> >> >> Sent: Tuesday, August 5, 2014 10:01:14 PM >> >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm >> >> configuration options >> >> >> >> Ah, thank you for the input! Just so I'm not spending time >> >> implementing the wrong changes, let me confirm I understand your >> >> comments. >> >> >> >> 1) Deploy host with Foreman >> >> 2) Apply Puppet catalog including ovirt Puppet module >> >> 3) Initiate host-deploy via rest API >> >> >> >> In the ovirt module the following takes place: >> >> >> >> 2a) Add yum repos >> >> 2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf >> >> >> > >> > you can have any # of files with any prefix :)) >> > >> >> For #2b I have a few questions >> >> >> >> * The name of the ".conf" file is simply for sorting and >> >> labeling/organization, it has not functional impact on what those >> >> overrides apply to? >> > >> > right. >> > >> >> * That file is managed on the ovirt-engine server, not the actual >> >> nodes? 
>> > >> > currently on the host, in future we will provide a method to add >> > this >> > to >> > engine database[1] >> > >> > [1] http://gerrit.ovirt.org/#/c/27064/ >> > >> >> * Is there any way to apply overrides to specific hosts? For >> >> example >> >> if I have some hosts that require a config and others that don't, >> >> how >> >> would I separate those *.conf files? This is more theoretical as >> >> right now my setup is common across all nodes. >> > >> > the poppet module can put whatever required on each host. >> > >> >> For #3...the implementation of API calls from within Puppet is a >> >> challenge and one I can't tackle yet, but definitely will make it >> >> a >> >> goal for the future. In the mean time, what's the "manual" way >> >> to >> >> initiate host-deploy? Is there a CLI command that would have the >> >> same >> >> result as an API call or is the recommended way to perform the >> >> API >> >> call manually (ie curl)? >> > >> > well, you can register host using the following protocol[1], but >> > it >> > is >> > difficult to do this securely, what you actually need is to >> > establish >> > ssh >> > trust for root with engine key then register. >> > >> > you can also use the register command using curl by something like >> > (I >> > have >> > not checked): >> > https://admin%40internal:password@engine/ovirt-engine/api/hosts >> > --- >> > <?xml version="1.0" encoding="UTF-8" standalone="yes"?> >> > <host> >> > <name>host1</name> >> > <address>dns</address> >> > <ssh> >> > <authentication_method>publickey</authentication_method> >> > </ssh> >> > <cluster id="cluster-uuid"/> >> > </host> >> > --- >> > >> > you can also use the ovirt-engine-sdk-python package: >> > --- >> > import ovirtsdk.api >> > import ovirtsdk.xml >> > >> > sdk = ovirtsdk.api.API( >> > url='https://host/ovirt-engine/api', >> > username='admin@internal', >> > password='password', >> > insecure=True, >> > ) >> > sdk.hosts.add( >> > ovirtsdk.xml.params.Host( >> > name='host1', >> > address='host1', >> > cluster=engine_api.clusters.get( >> > 'cluster' >> > ), >> > ssh=self._ovirtsdk_xml.params.SSH( >> > authentication_method='publickey', >> > ), >> > ) >> > ) >> > --- >> > >> > [1] http://www.ovirt.org/Features/HostDeployProtocol >> > >> >> >> >> Thanks! >> >> - Trey >> >> >> >> On Tue, Aug 5, 2014 at 1:45 PM, Alon Bar-Lev <alonbl@redhat.com> >> >> wrote: >> >> > >> >> > >> >> > ----- Original Message ----- >> >> >> From: "Trey Dockendorf" <treydock@gmail.com> >> >> >> To: "ybronhei" <ybronhei@redhat.com> >> >> >> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" >> >> >> <fabiand@redhat.com>, >> >> >> "Dan >> >> >> Kenigsberg" <danken@redhat.com>, "Itamar >> >> >> Heim" <iheim@redhat.com>, "Douglas Landgraf" >> >> >> <dougsland@redhat.com>, >> >> >> "Alon >> >> >> Bar-Lev" <alonbl@redhat.com> >> >> >> Sent: Tuesday, August 5, 2014 9:36:24 PM >> >> >> Subject: Re: [ovirt-users] Proper way to change and persist >> >> >> vdsm >> >> >> configuration options >> >> >> >> >> >> On Tue, Aug 5, 2014 at 12:32 PM, ybronhei >> >> >> <ybronhei@redhat.com> >> >> >> wrote: >> >> >> > Hey, >> >> >> > >> >> >> > Just noticed something that I forgot about.. >> >> >> > before filing new BZ, see in ovirt-host-deploy >> >> >> > README.environment >> >> >> > [1] >> >> >> > the >> >> >> > section: >> >> >> > VDSM/configOverride(bool) [True] >> >> >> > Override vdsm configuration file. 
>> >> >> > >> >> >> > changing it to false will keep your vdsm.conf file as is >> >> >> > after >> >> >> > deploying >> >> >> > the >> >> >> > host again (what happens after node upgrade) >> >> >> > >> >> >> > [1] >> >> >> > https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment >> >> >> > >> >> >> > please check if that what you meant.. >> >> >> > >> >> >> > Thanks, >> >> >> > Yaniv Bronhaim. >> >> >> > >> >> >> >> >> >> I was unaware of that package. I will check that out as that >> >> >> seems >> >> >> to >> >> >> be what I am looking for. >> >> >> >> >> >> I have not filed this in BZ and will hold off pending >> >> >> ovirt-host-deploy. If you feel a BZ is still necessary then >> >> >> please >> >> >> do >> >> >> file one and I would be happy to provide input if it would >> >> >> help. >> >> >> >> >> >> Right now this is my workflow. >> >> >> >> >> >> 1. Foreman provisions bare-metal server with CentOS 6.5 >> >> >> 2. Once provisioned and system rebooted Puppet applies >> >> >> puppet-ovirt >> >> >> [1] module that adds the necessary yum repos >> >> > >> >> > and should stop here.. >> >> > >> >> >> , and installs packages. >> >> >> Part of my Puppet deployment is basic things like sudo >> >> >> management >> >> >> (vdsm's sudo is account for), sssd configuration, and other >> >> >> aspects >> >> >> that are needed by every system in my infrastructure. Part of >> >> >> the >> >> >> ovirt::node Puppet class is managing vdsm.conf, and in my case >> >> >> that >> >> >> means ensuring iSER is enabled for iSCSI over IB. >> >> > >> >> > you can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf >> >> > --- >> >> > VDSM_CONFIG/section/key=str:content >> >> > --- >> >> > >> >> > this will create a proper vdsm.conf when host-deploy is >> >> > initiated. >> >> > >> >> > you should now use the rest api to initiate host-deploy. >> >> > >> >> >> 3. Once host is online and has had the full Puppet catalog >> >> >> applied I >> >> >> log into ovirt-engine web interface and add those host >> >> >> (pulling >> >> >> it's >> >> >> data via the Foreman provider). >> >> > >> >> > right, but you should let this process install packages and >> >> > manage >> >> > configuration. >> >> > >> >> >> What I've noticed is that after step #3, after a host is added >> >> >> by >> >> >> ovirt-engine, the vdsm.conf file is reset to default and I >> >> >> have >> >> >> to >> >> >> reapply Puppet before it can be used as the one of my Data >> >> >> Storage >> >> >> Domains requires iSER (not available over TCP). >> >> > >> >> > right, see above. >> >> > >> >> >> What would be the workflow using ovirt-host-deploy? Thus far >> >> >> I've >> >> >> had >> >> >> to piece together my workflow based on the documentation and >> >> >> filling >> >> >> in blanks where possible since I do require customizations to >> >> >> vdsm.conf and the documented workflow of adding a host via web >> >> >> UI >> >> >> does >> >> >> not allow for such customization. >> >> >> >> >> >> Thanks, >> >> >> - Trey >> >> >> >> >> >> [1] - https://github.com/treydock/puppet-ovirt (README not >> >> >> fully >> >> >> updated as still working out how to use Puppet with oVirt) >> >> >> >> >> >> > >> >> >> > On 08/05/2014 08:12 AM, Trey Dockendorf wrote: >> >> >> >> >> >> >> >> I'll file BZ. 
As far as I can recall this has been an >> >> >> >> issue >> >> >> >> since >> >> >> >> 3.3.x >> >> >> >> as >> >> >> >> I have been using Puppet to modify values and have had to >> >> >> >> rerun >> >> >> >> Puppet >> >> >> >> after installing a node via GUI and when performing update >> >> >> >> from >> >> >> >> GUI. >> >> >> >> Given >> >> >> >> that it has occurred when VDSM version didn't change on the >> >> >> >> node >> >> >> >> it >> >> >> >> seems >> >> >> >> likely to be something being done by Python code that >> >> >> >> bootstraps >> >> >> >> a >> >> >> >> node >> >> >> >> and >> >> >> >> performs the other tasks. I won't have any systems >> >> >> >> available >> >> >> >> to >> >> >> >> test >> >> >> >> with >> >> >> >> for a few days. New hardware specifically for our oVirt >> >> >> >> deployment >> >> >> >> is >> >> >> >> on >> >> >> >> order so should be able to more thoroughly debug and >> >> >> >> capture >> >> >> >> logs >> >> >> >> at >> >> >> >> that >> >> >> >> time. >> >> >> >> >> >> >> >> Would using vdsm-reg be a better solution for adding new >> >> >> >> nodes? >> >> >> >> I >> >> >> >> only >> >> >> >> tried using vdsm-reg once and it went very poorly...lots of >> >> >> >> missing >> >> >> >> dependencies not pulled in from yum install I had to >> >> >> >> install >> >> >> >> manually >> >> >> >> via >> >> >> >> yum. Then the node was auto added to newest cluster with >> >> >> >> no >> >> >> >> ability >> >> >> >> to >> >> >> >> change the cluster. Be happy to debug that too if there's >> >> >> >> some >> >> >> >> docs >> >> >> >> that >> >> >> >> outline the expected behavior. >> >> >> >> >> >> >> >> Using vdsm-reg or something similar seems like a better fit >> >> >> >> for >> >> >> >> puppet >> >> >> >> deployed nodes, as opposed to requiring GUI steps to add >> >> >> >> the >> >> >> >> node. >> >> >> >> >> >> >> >> Thanks >> >> >> >> - Trey >> >> >> >> On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei@redhat.com> >> >> >> >> wrote: >> >> >> >> >> >> >> >>> On 07/31/2014 01:28 AM, Trey Dockendorf wrote: >> >> >> >>> >> >> >> >>>> I'm running ovirt nodes that are stock CentOS 6.5 systems >> >> >> >>>> with >> >> >> >>>> VDSM >> >> >> >>>> installed. I am using iSER to do iSCSI over RDMA and to >> >> >> >>>> make >> >> >> >>>> that >> >> >> >>>> work I have to modify /etc/vdsm/vdsm.conf to include the >> >> >> >>>> following: >> >> >> >>>> >> >> >> >>>> [irs] >> >> >> >>>> iscsi_default_ifaces = iser,default >> >> >> >>>> >> >> >> >>>> I've noticed that any time I upgrade a node from the >> >> >> >>>> engine >> >> >> >>>> web >> >> >> >>>> interface that changes to vdsm.conf are wiped out. I >> >> >> >>>> don't >> >> >> >>>> know >> >> >> >>>> if >> >> >> >>>> this is being done by the configuration code or by the >> >> >> >>>> vdsm >> >> >> >>>> package. >> >> >> >>>> Is there a more reliable way to ensure changes to >> >> >> >>>> vdsm.conf >> >> >> >>>> are >> >> >> >>>> NOT >> >> >> >>>> removed automatically? >> >> >> >>>> >> >> >> >>> >> >> >> >>> Hey, >> >> >> >>> >> >> >> >>> vdsm.conf shouldn't wiped out and shouldn't changed at all >> >> >> >>> during >> >> >> >>> upgrade. >> >> >> >>> other related conf files (such as libvirtd.conf) might be >> >> >> >>> overrided >> >> >> >>> to >> >> >> >>> keep >> >> >> >>> defaults configurations for vdsm. but vdsm.conf should >> >> >> >>> persist >> >> >> >>> with >> >> >> >>> user's >> >> >> >>> modification. 
from my check, regular yum upgrade doesn't >> >> >> >>> touch >> >> >> >>> vdsm.conf >> >> >> >>> >> >> >> >>> Douglas can you verify that with node upgrade? might be >> >> >> >>> specific >> >> >> >>> to >> >> >> >>> that >> >> >> >>> flow.. >> >> >> >>> >> >> >> >>> Trey, can file a bugzilla on that and describe your steps >> >> >> >>> there? >> >> >> >>> >> >> >> >>> Thanks >> >> >> >>> >> >> >> >>> Yaniv Bronhaim, >> >> >> >>> >> >> >> >>>> >> >> >> >>>> Thanks, >> >> >> >>>> - Trey >> >> >> >>>> _______________________________________________ >> >> >> >>>> Users mailing list >> >> >> >>>> Users@ovirt.org >> >> >> >>>> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >>>> >> >> >> >>>> >> >> >> >>> >> >> >> >>> -- >> >> >> >>> Yaniv Bronhaim. >> >> >> >>> >> >> >> >> >> >> >> > >> >> >> >> >> >>

----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 10:55:28 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Is there a method that works in EL6?
$ openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -pubkey | ssh-keygen -i -m PKCS8 -f /dev/stdin ssh-keygen: illegal option -- m
$ openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -pubkey | ssh-keygen -i -f /dev/stdin buffer_get_string_ret: bad string length 813826338 key_from_blob: can't read key type decode blob failed.
I achieved somewhat similar result by doing the following, though likely is a security issue having something like Facter read from /etc/pki/ovirt-engine/keys
$ ssh-keygen -y -f /etc/pki/ovirt-engine/keys/engine_id_rsa ssh-rsa <PUBKEY>
this is one option, but it uses the private key and an unsupported artifact... better:

openssl pkcs12 -in /etc/pki/ovirt-engine/keys/engine.p12 -nocerts -passin pass:mypass -nodes 2> /dev/null | ssh-keygen -y -f /dev/stdin

I hope el6 will get a newer openssh with support for the PKCS8 format.
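A minimal, commented sketch of the pipeline above, assuming the default engine key-store path and "mypass" as the PKI password, both taken from the example and both of which may differ in your deployment:

---
# 1. dump the (unencrypted) private key out of the engine's PKCS#12 store
# 2. have ssh-keygen derive the matching OpenSSH public key from it
openssl pkcs12 -in /etc/pki/ovirt-engine/keys/engine.p12 \
        -nocerts -nodes -passin pass:mypass 2>/dev/null \
    | ssh-keygen -y -f /dev/stdin
---

As noted in the thread, this route still reads the private key, so it is best treated as a fallback for hosts whose ssh-keygen lacks the -m PKCS8 option.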
Thanks, - Trey
On Thu, Aug 21, 2014 at 1:44 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 9:41:03 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
Sorry, I meant the SSH public key. Is that a file or in the database? I did a "grep" for the public key downloaded via the command=get-ssh-trust and found no files in /etc/ or /var/lib/ovirt-engine that matched.
openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer -noout -pubkey | ssh-keygen -i -m PKCS8 -f /dev/stdin
- Trey
On Thu, Aug 21, 2014 at 11:33 AM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Oved Ourfali" <ovedo@redhat.com> Sent: Thursday, August 21, 2014 7:15:56 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
I likely won't automate this yet, as a lot of what's coming in 3.5 seems to obsolete many things I was doing previously via Puppet. In particular the Foreman integration and the ability to add custom iptables rules to engine-config. Previous posts on the list made is seem like modifying "IPTables" could potentially make upgrades less reliable.
Created a gist of a working series of commands based on Alon's example using the Host Deploy Protocol [1].
https://gist.github.com/treydock/570a776b5c160bca7c9c
Curious , where is the public key used by the ovirt-engine stored? The one that is available using command=get-ssh-trust. Is there a way to query it from the engine? I'm thinking if it would be possible to create a custom Facter face that stores the value of that public key so easier to re-use and access for deployment.
/etc/pki/ovirt-engine/certs/engine.cer
Thanks, - Trey
[1] - http://www.ovirt.org/Features/HostDeployProtocol
On Tue, Aug 5, 2014 at 11:32 PM, Alon Bar-Lev <alonbl@redhat.com> wrote:
----- Original Message ----- > From: "Trey Dockendorf" <treydock@gmail.com> > To: "Alon Bar-Lev" <alonbl@redhat.com> > Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, > "Fabian > Deutsch" <fabiand@redhat.com>, "Dan > Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, > "Douglas Landgraf" <dougsland@redhat.com>, "Oved > Ourfali" <ovedo@redhat.com> > Sent: Tuesday, August 5, 2014 11:27:45 PM > Subject: Re: [ovirt-users] Proper way to change and persist vdsm > configuration options > > Thanks for clarifying, makes sense now. > > The public key trust needed for registration, is that the same key > that would be used when adding host via UI?
yes.
you can download it via: $ curl 'http://engine/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY' $ curl 'http://engine/ovirt-engine/services/host-register?version=1&command=get-ssh-trust'
probably better to use https and verify CA certificate fingerprint if you do that from host.
> Any examples of how to use the HostDeployProtocol [1]? I like the > idea of using registration but haven't the slightest idea how to > implement what's described in the docs [1]. I do recall seeing an > article posted (searching email and can't find) that had a nice > walk-through of how to use the oVirt API using browser tools. I'm > unsure if this HostDeployProtocol would be done that way or via some > other method.
there are two apis, the formal rest-api that is exposed by the engine and can be accessed using any rest api tool or ovirt-engine-cli, ovirt-engine-sdk-java, ovirt-engine-sdk-python wrappers. I sent you a minimal example in previous message.
and the host-deploy protocol[1], which should have been exposed in the rest-api, but for some reason I cannot understand it was not included in the public interface of the engine.
the advantage of using the rest-api is that you can achieve full cycle using the protocol, the add host cycle is what you seek.
the host-deploy protocol just register the host, but the sysadmin needs to approve the host via the ui (or via the rest api) before it is usable.
> > Thanks, > - Trey > > > [1] http://www.ovirt.org/Features/HostDeployProtocol > > On Tue, Aug 5, 2014 at 3:01 PM, Alon Bar-Lev <alonbl@redhat.com> > wrote: > > > > > > ----- Original Message ----- > >> From: "Trey Dockendorf" <treydock@gmail.com> > >> To: "Alon Bar-Lev" <alonbl@redhat.com> > >> Cc: "ybronhei" <ybronhei@redhat.com>, "users" <users@ovirt.org>, > >> "Fabian > >> Deutsch" <fabiand@redhat.com>, "Dan > >> Kenigsberg" <danken@redhat.com>, "Itamar Heim" > >> <iheim@redhat.com>, > >> "Douglas Landgraf" <dougsland@redhat.com>, "Oved > >> Ourfali" <ovedo@redhat.com> > >> Sent: Tuesday, August 5, 2014 10:45:12 PM > >> Subject: Re: [ovirt-users] Proper way to change and persist vdsm > >> configuration options > >> > >> Excellent, so installing 'ovirt-host-deploy' on each node then > >> configuring the /etc/ovirt-host-deploy.conf.d files seems very > >> automate-able, will see how it works in practice. > > > > you do not need to install the ovirt-host-deploy, just create the > > files. > > > >> Regarding the actual host registration and getting the host added > >> to > >> ovirt-engine, are there other methods besides the API and the > >> sdk? > >> Would it be possible to configure the necessary > >> ovirt-host-deploy.conf.d files then execute "ovirt-host-deploy"? > >> I > >> notice that running 'ovirt-host-deploy' wants to make whatever > >> host > >> executes it a ovir hypervisor but haven't yet run it all the way > >> through as no server to test with at this time. There seems to > >> be > >> no > >> "--help" or similar command line argument. > > > > you should not run host-deploy directly, but via the engine's > > process, > > either registration or add host as I replied previously. > > > > when base system is ready, you issue add host via api of engine or > > via > > ui, > > the other alternative is to register the host host the host-deploy > > protocol, and approve the host via api of engine or via ui. > > > >> I'm sure this will all be more clear once I attempt the steps and > >> run > >> through the motions. Will try to find a system to test on so I'm > >> ready once our new servers arrive. > >> > >> Thanks, > >> - Trey > >> > >> On Tue, Aug 5, 2014 at 2:23 PM, Alon Bar-Lev <alonbl@redhat.com> > >> wrote: > >> > > >> > > >> > ----- Original Message ----- > >> >> From: "Trey Dockendorf" <treydock@gmail.com> > >> >> To: "Alon Bar-Lev" <alonbl@redhat.com> > >> >> Cc: "ybronhei" <ybronhei@redhat.com>, "users" > >> >> <users@ovirt.org>, > >> >> "Fabian > >> >> Deutsch" <fabiand@redhat.com>, "Dan > >> >> Kenigsberg" <danken@redhat.com>, "Itamar Heim" > >> >> <iheim@redhat.com>, > >> >> "Douglas Landgraf" <dougsland@redhat.com> > >> >> Sent: Tuesday, August 5, 2014 10:01:14 PM > >> >> Subject: Re: [ovirt-users] Proper way to change and persist > >> >> vdsm > >> >> configuration options > >> >> > >> >> Ah, thank you for the input! Just so I'm not spending time > >> >> implementing the wrong changes, let me confirm I understand > >> >> your > >> >> comments. 
> >> >> > >> >> 1) Deploy host with Foreman > >> >> 2) Apply Puppet catalog including ovirt Puppet module > >> >> 3) Initiate host-deploy via rest API > >> >> > >> >> In the ovirt module the following takes place: > >> >> > >> >> 2a) Add yum repos > >> >> 2b) Manage /etc/ovirt-host-deploy.conf.d/40-xxx.conf > >> >> > >> > > >> > you can have any # of files with any prefix :)) > >> > > >> >> For #2b I have a few questions > >> >> > >> >> * The name of the ".conf" file is simply for sorting and > >> >> labeling/organization, it has not functional impact on what > >> >> those > >> >> overrides apply to? > >> > > >> > right. > >> > > >> >> * That file is managed on the ovirt-engine server, not the > >> >> actual > >> >> nodes? > >> > > >> > currently on the host, in future we will provide a method to > >> > add > >> > this > >> > to > >> > engine database[1] > >> > > >> > [1] http://gerrit.ovirt.org/#/c/27064/ > >> > > >> >> * Is there any way to apply overrides to specific hosts? For > >> >> example > >> >> if I have some hosts that require a config and others that > >> >> don't, > >> >> how > >> >> would I separate those *.conf files? This is more theoretical > >> >> as > >> >> right now my setup is common across all nodes. > >> > > >> > the poppet module can put whatever required on each host. > >> > > >> >> For #3...the implementation of API calls from within Puppet is > >> >> a > >> >> challenge and one I can't tackle yet, but definitely will make > >> >> it > >> >> a > >> >> goal for the future. In the mean time, what's the "manual" > >> >> way > >> >> to > >> >> initiate host-deploy? Is there a CLI command that would have > >> >> the > >> >> same > >> >> result as an API call or is the recommended way to perform the > >> >> API > >> >> call manually (ie curl)? > >> > > >> > well, you can register host using the following protocol[1], > >> > but > >> > it > >> > is > >> > difficult to do this securely, what you actually need is to > >> > establish > >> > ssh > >> > trust for root with engine key then register. > >> > > >> > you can also use the register command using curl by something > >> > like > >> > (I > >> > have > >> > not checked): > >> > https://admin%40internal:password@engine/ovirt-engine/api/hosts > >> > --- > >> > <?xml version="1.0" encoding="UTF-8" standalone="yes"?> > >> > <host> > >> > <name>host1</name> > >> > <address>dns</address> > >> > <ssh> > >> > <authentication_method>publickey</authentication_method> > >> > </ssh> > >> > <cluster id="cluster-uuid"/> > >> > </host> > >> > --- > >> > > >> > you can also use the ovirt-engine-sdk-python package: > >> > --- > >> > import ovirtsdk.api > >> > import ovirtsdk.xml > >> > > >> > sdk = ovirtsdk.api.API( > >> > url='https://host/ovirt-engine/api', > >> > username='admin@internal', > >> > password='password', > >> > insecure=True, > >> > ) > >> > sdk.hosts.add( > >> > ovirtsdk.xml.params.Host( > >> > name='host1', > >> > address='host1', > >> > cluster=engine_api.clusters.get( > >> > 'cluster' > >> > ), > >> > ssh=self._ovirtsdk_xml.params.SSH( > >> > authentication_method='publickey', > >> > ), > >> > ) > >> > ) > >> > --- > >> > > >> > [1] http://www.ovirt.org/Features/HostDeployProtocol > >> > > >> >> > >> >> Thanks! 
> >> >> - Trey > >> >> > >> >> On Tue, Aug 5, 2014 at 1:45 PM, Alon Bar-Lev > >> >> <alonbl@redhat.com> > >> >> wrote: > >> >> > > >> >> > > >> >> > ----- Original Message ----- > >> >> >> From: "Trey Dockendorf" <treydock@gmail.com> > >> >> >> To: "ybronhei" <ybronhei@redhat.com> > >> >> >> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" > >> >> >> <fabiand@redhat.com>, > >> >> >> "Dan > >> >> >> Kenigsberg" <danken@redhat.com>, "Itamar > >> >> >> Heim" <iheim@redhat.com>, "Douglas Landgraf" > >> >> >> <dougsland@redhat.com>, > >> >> >> "Alon > >> >> >> Bar-Lev" <alonbl@redhat.com> > >> >> >> Sent: Tuesday, August 5, 2014 9:36:24 PM > >> >> >> Subject: Re: [ovirt-users] Proper way to change and persist > >> >> >> vdsm > >> >> >> configuration options > >> >> >> > >> >> >> On Tue, Aug 5, 2014 at 12:32 PM, ybronhei > >> >> >> <ybronhei@redhat.com> > >> >> >> wrote: > >> >> >> > Hey, > >> >> >> > > >> >> >> > Just noticed something that I forgot about.. > >> >> >> > before filing new BZ, see in ovirt-host-deploy > >> >> >> > README.environment > >> >> >> > [1] > >> >> >> > the > >> >> >> > section: > >> >> >> > VDSM/configOverride(bool) [True] > >> >> >> > Override vdsm configuration file. > >> >> >> > > >> >> >> > changing it to false will keep your vdsm.conf file as is > >> >> >> > after > >> >> >> > deploying > >> >> >> > the > >> >> >> > host again (what happens after node upgrade) > >> >> >> > > >> >> >> > [1] > >> >> >> > https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment > >> >> >> > > >> >> >> > please check if that what you meant.. > >> >> >> > > >> >> >> > Thanks, > >> >> >> > Yaniv Bronhaim. > >> >> >> > > >> >> >> > >> >> >> I was unaware of that package. I will check that out as > >> >> >> that > >> >> >> seems > >> >> >> to > >> >> >> be what I am looking for. > >> >> >> > >> >> >> I have not filed this in BZ and will hold off pending > >> >> >> ovirt-host-deploy. If you feel a BZ is still necessary > >> >> >> then > >> >> >> please > >> >> >> do > >> >> >> file one and I would be happy to provide input if it would > >> >> >> help. > >> >> >> > >> >> >> Right now this is my workflow. > >> >> >> > >> >> >> 1. Foreman provisions bare-metal server with CentOS 6.5 > >> >> >> 2. Once provisioned and system rebooted Puppet applies > >> >> >> puppet-ovirt > >> >> >> [1] module that adds the necessary yum repos > >> >> > > >> >> > and should stop here.. > >> >> > > >> >> >> , and installs packages. > >> >> >> Part of my Puppet deployment is basic things like sudo > >> >> >> management > >> >> >> (vdsm's sudo is account for), sssd configuration, and other > >> >> >> aspects > >> >> >> that are needed by every system in my infrastructure. Part > >> >> >> of > >> >> >> the > >> >> >> ovirt::node Puppet class is managing vdsm.conf, and in my > >> >> >> case > >> >> >> that > >> >> >> means ensuring iSER is enabled for iSCSI over IB. > >> >> > > >> >> > you can create a file > >> >> > /etc/ovirt-host-deploy.conf.d/40-xxx.conf > >> >> > --- > >> >> > VDSM_CONFIG/section/key=str:content > >> >> > --- > >> >> > > >> >> > this will create a proper vdsm.conf when host-deploy is > >> >> > initiated. > >> >> > > >> >> > you should now use the rest api to initiate host-deploy. > >> >> > > >> >> >> 3. Once host is online and has had the full Puppet catalog > >> >> >> applied I > >> >> >> log into ovirt-engine web interface and add those host > >> >> >> (pulling > >> >> >> it's > >> >> >> data via the Foreman provider). 
> >> >> > > >> >> > right, but you should let this process install packages and > >> >> > manage > >> >> > configuration. > >> >> > > >> >> >> What I've noticed is that after step #3, after a host is > >> >> >> added > >> >> >> by > >> >> >> ovirt-engine, the vdsm.conf file is reset to default and I > >> >> >> have > >> >> >> to > >> >> >> reapply Puppet before it can be used as the one of my Data > >> >> >> Storage > >> >> >> Domains requires iSER (not available over TCP). > >> >> > > >> >> > right, see above. > >> >> > > >> >> >> What would be the workflow using ovirt-host-deploy? Thus > >> >> >> far > >> >> >> I've > >> >> >> had > >> >> >> to piece together my workflow based on the documentation > >> >> >> and > >> >> >> filling > >> >> >> in blanks where possible since I do require customizations > >> >> >> to > >> >> >> vdsm.conf and the documented workflow of adding a host via > >> >> >> web > >> >> >> UI > >> >> >> does > >> >> >> not allow for such customization. > >> >> >> > >> >> >> Thanks, > >> >> >> - Trey > >> >> >> > >> >> >> [1] - https://github.com/treydock/puppet-ovirt (README not > >> >> >> fully > >> >> >> updated as still working out how to use Puppet with oVirt) > >> >> >> > >> >> >> > > >> >> >> > On 08/05/2014 08:12 AM, Trey Dockendorf wrote: > >> >> >> >> > >> >> >> >> I'll file BZ. As far as I can recall this has been an > >> >> >> >> issue > >> >> >> >> since > >> >> >> >> 3.3.x > >> >> >> >> as > >> >> >> >> I have been using Puppet to modify values and have had > >> >> >> >> to > >> >> >> >> rerun > >> >> >> >> Puppet > >> >> >> >> after installing a node via GUI and when performing > >> >> >> >> update > >> >> >> >> from > >> >> >> >> GUI. > >> >> >> >> Given > >> >> >> >> that it has occurred when VDSM version didn't change on > >> >> >> >> the > >> >> >> >> node > >> >> >> >> it > >> >> >> >> seems > >> >> >> >> likely to be something being done by Python code that > >> >> >> >> bootstraps > >> >> >> >> a > >> >> >> >> node > >> >> >> >> and > >> >> >> >> performs the other tasks. I won't have any systems > >> >> >> >> available > >> >> >> >> to > >> >> >> >> test > >> >> >> >> with > >> >> >> >> for a few days. New hardware specifically for our oVirt > >> >> >> >> deployment > >> >> >> >> is > >> >> >> >> on > >> >> >> >> order so should be able to more thoroughly debug and > >> >> >> >> capture > >> >> >> >> logs > >> >> >> >> at > >> >> >> >> that > >> >> >> >> time. > >> >> >> >> > >> >> >> >> Would using vdsm-reg be a better solution for adding new > >> >> >> >> nodes? > >> >> >> >> I > >> >> >> >> only > >> >> >> >> tried using vdsm-reg once and it went very poorly...lots > >> >> >> >> of > >> >> >> >> missing > >> >> >> >> dependencies not pulled in from yum install I had to > >> >> >> >> install > >> >> >> >> manually > >> >> >> >> via > >> >> >> >> yum. Then the node was auto added to newest cluster > >> >> >> >> with > >> >> >> >> no > >> >> >> >> ability > >> >> >> >> to > >> >> >> >> change the cluster. Be happy to debug that too if > >> >> >> >> there's > >> >> >> >> some > >> >> >> >> docs > >> >> >> >> that > >> >> >> >> outline the expected behavior. > >> >> >> >> > >> >> >> >> Using vdsm-reg or something similar seems like a better > >> >> >> >> fit > >> >> >> >> for > >> >> >> >> puppet > >> >> >> >> deployed nodes, as opposed to requiring GUI steps to add > >> >> >> >> the > >> >> >> >> node. 
> >> >> >> >> > >> >> >> >> Thanks > >> >> >> >> - Trey > >> >> >> >> On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei@redhat.com> > >> >> >> >> wrote: > >> >> >> >> > >> >> >> >>> On 07/31/2014 01:28 AM, Trey Dockendorf wrote: > >> >> >> >>> > >> >> >> >>>> I'm running ovirt nodes that are stock CentOS 6.5 > >> >> >> >>>> systems > >> >> >> >>>> with > >> >> >> >>>> VDSM > >> >> >> >>>> installed. I am using iSER to do iSCSI over RDMA and > >> >> >> >>>> to > >> >> >> >>>> make > >> >> >> >>>> that > >> >> >> >>>> work I have to modify /etc/vdsm/vdsm.conf to include > >> >> >> >>>> the > >> >> >> >>>> following: > >> >> >> >>>> > >> >> >> >>>> [irs] > >> >> >> >>>> iscsi_default_ifaces = iser,default > >> >> >> >>>> > >> >> >> >>>> I've noticed that any time I upgrade a node from the > >> >> >> >>>> engine > >> >> >> >>>> web > >> >> >> >>>> interface that changes to vdsm.conf are wiped out. I > >> >> >> >>>> don't > >> >> >> >>>> know > >> >> >> >>>> if > >> >> >> >>>> this is being done by the configuration code or by the > >> >> >> >>>> vdsm > >> >> >> >>>> package. > >> >> >> >>>> Is there a more reliable way to ensure changes to > >> >> >> >>>> vdsm.conf > >> >> >> >>>> are > >> >> >> >>>> NOT > >> >> >> >>>> removed automatically? > >> >> >> >>>> > >> >> >> >>> > >> >> >> >>> Hey, > >> >> >> >>> > >> >> >> >>> vdsm.conf shouldn't wiped out and shouldn't changed at > >> >> >> >>> all > >> >> >> >>> during > >> >> >> >>> upgrade. > >> >> >> >>> other related conf files (such as libvirtd.conf) might > >> >> >> >>> be > >> >> >> >>> overrided > >> >> >> >>> to > >> >> >> >>> keep > >> >> >> >>> defaults configurations for vdsm. but vdsm.conf should > >> >> >> >>> persist > >> >> >> >>> with > >> >> >> >>> user's > >> >> >> >>> modification. from my check, regular yum upgrade > >> >> >> >>> doesn't > >> >> >> >>> touch > >> >> >> >>> vdsm.conf > >> >> >> >>> > >> >> >> >>> Douglas can you verify that with node upgrade? might be > >> >> >> >>> specific > >> >> >> >>> to > >> >> >> >>> that > >> >> >> >>> flow.. > >> >> >> >>> > >> >> >> >>> Trey, can file a bugzilla on that and describe your > >> >> >> >>> steps > >> >> >> >>> there? > >> >> >> >>> > >> >> >> >>> Thanks > >> >> >> >>> > >> >> >> >>> Yaniv Bronhaim, > >> >> >> >>> > >> >> >> >>>> > >> >> >> >>>> Thanks, > >> >> >> >>>> - Trey > >> >> >> >>>> _______________________________________________ > >> >> >> >>>> Users mailing list > >> >> >> >>>> Users@ovirt.org > >> >> >> >>>> http://lists.ovirt.org/mailman/listinfo/users > >> >> >> >>>> > >> >> >> >>>> > >> >> >> >>> > >> >> >> >>> -- > >> >> >> >>> Yaniv Bronhaim. > >> >> >> >>> > >> >> >> >> > >> >> >> > > >> >> >> > >> >> > >> >

afaik el6 and openssh have supported pkcs#8 for ages; at least I have no issues using pkcs#8-formatted keys with el6 hosts.

On 21.08.2014 22:05, Alon Bar-Lev wrote:
I hope el6 will have newer openssh with support for the PKCS8 format.
-- Mit freundlichen Grüßen / Regards Sven Kieske Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

----- Original Message -----
From: "Sven Kieske" <S.Kieske@mittwald.de> To: users@ovirt.org Sent: Friday, August 22, 2014 10:32:05 AM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
afaik el6 and openssh support pkcs#8 for ages. at least I have no issue using pkcs#8 formatted keys with el6 hosts.
the ssh-keygen does not.
Am 21.08.2014 22:05, schrieb Alon Bar-Lev:
I hope el6 will have newer openssh with support for the PKCS8 format.
-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

well yeah, it does not generate pkcs#8 by default, but you can easily convert existing keys via openssl:

openssl pkcs8 -topk8 -v2 des3 \
 -in test_rsa_key.old -passin 'pass:super secret passphrase' \
 -out test_rsa_key -passout 'pass:super secret passphrase'

see this page for more details:
http://martin.kleppmann.com/2013/05/24/improving-security-of-ssh-private-key...

newer ssh-keygen versions use PBKDF2 by default and not MD5 anymore.

HTH

On 22.08.2014 10:51, Alon Bar-Lev wrote:
the ssh-keygen does not.
-- Mit freundlichen Grüßen / Regards Sven Kieske Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
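A quick check of the conversion Sven shows above, assuming his example file names; the PEM header is enough to tell the two formats apart:

---
head -1 test_rsa_key
# expected: -----BEGIN ENCRYPTED PRIVATE KEY-----
# (the pre-conversion key starts with -----BEGIN RSA PRIVATE KEY-----)
---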

you are hijacking this thread... but anyway... please refer to the original question: how to easily convert an X.509 certificate to an SSH public key. the best method should avoid using the private key. newer ssh-keygen supports exactly that.

----- Original Message -----
From: "Sven Kieske" <S.Kieske@mittwald.de> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: users@ovirt.org Sent: Friday, August 22, 2014 1:24:17 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
well yeah, it does not generate pkcs#8 by default but you can easily convert existing keys via openssl:
openssl pkcs8 -topk8 -v2 des3 \ -in test_rsa_key.old -passin 'pass:super secret passphrase' \ -out test_rsa_key -passout 'pass:super secret passphrase' see this page for more details: http://martin.kleppmann.com/2013/05/24/improving-security-of-ssh-private-key...
newer ssh-keygen versions use PBKDF2 by default and not MD5 anymore.
HTH
Am 22.08.2014 10:51, schrieb Alon Bar-Lev:
the ssh-keygen does not.
-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

For what it's worth, I managed to get the ovirt-engine's public key from engine.cer using Ruby and turn it into a Puppet fact. I had to borrow some code from https://github.com/bensie/sshkey

https://github.com/treydock/puppet-ovirt/blob/1.x/lib/facter/ovirt_engine_ss...

Thanks for all the help Alon, I now have semi-automated deployment of nodes :). Once 3.5 is released and the Foreman integration is in place, it will be much nicer.

Thanks,
- Trey

On Fri, Aug 22, 2014 at 5:30 AM, Alon Bar-Lev <alonbl@redhat.com> wrote:
you are hijacking this thread... but anyway... please refer to the original question, how to easily convert X.509 certificate to SSH public key. the best method should avoid using the private key. newer ssh-keygen supports exactly that.
----- Original Message -----
From: "Sven Kieske" <S.Kieske@mittwald.de> To: "Alon Bar-Lev" <alonbl@redhat.com> Cc: users@ovirt.org Sent: Friday, August 22, 2014 1:24:17 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
well yeah, it does not generate pkcs#8 by default but you can easily convert existing keys via openssl:
openssl pkcs8 -topk8 -v2 des3 \ -in test_rsa_key.old -passin 'pass:super secret passphrase' \ -out test_rsa_key -passout 'pass:super secret passphrase' see this page for more details: http://martin.kleppmann.com/2013/05/24/improving-security-of-ssh-private-key...
newer ssh-keygen versions use PBKDF2 by default and not MD5 anymore.
HTH
Am 22.08.2014 10:51, schrieb Alon Bar-Lev:
the ssh-keygen does not.
-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
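A shell-only variant of the fact Trey describes at the top of this message, sketched as a Facter external fact; assumptions: a Facter version (1.7+) that reads executable scripts from /etc/facter/facts.d/, the placeholder engine URL from Alon's earlier reply, and the fact name ovirt_engine_ssh_public_key, which is purely illustrative:

---
#!/bin/sh
# hypothetical path: /etc/facter/facts.d/ovirt_engine_ssh_public_key.sh
# (mark the script executable); it fetches the engine's SSH public key
# using the pki-resource service mentioned earlier in this thread.
# prefer https plus CA verification when running this anywhere other
# than the engine host itself.
key=$(curl -s 'http://engine/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY')
echo "ovirt_engine_ssh_public_key=${key}"
---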

Hi,

sorry for digging up an old thread, but I have a problem with the proposed way to preserve vdsm.conf between host redeployments.

I've created a file /etc/ovirt-host-deploy.conf.d/migration-bw.conf on ovirt-engine with the contents:

[environment:enforce]
VDSM_CONFIG/vars/migration_max_bandwidth=str:300

Then I tried to add a new host via the GUI - it was added, but with the default vdsm.conf, i.e. without my custom option "migration_max_bandwidth". I also tried restarting ovirt-engine and reinstalling the host - same result, no custom options, only the default vdsm.conf content.

Am I missing something?

Freshly installed ovirt 3.5-pre, centos 6.5 as engine, centos7 as hosts

Yuriy Demchenko

On 08/05/2014 10:45 PM, Alon Bar-Lev wrote:
----- Original Message -----
From: "Trey Dockendorf" <treydock@gmail.com> To: "ybronhei" <ybronhei@redhat.com> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" <fabiand@redhat.com>, "Dan Kenigsberg" <danken@redhat.com>, "Itamar Heim" <iheim@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>, "Alon Bar-Lev" <alonbl@redhat.com> Sent: Tuesday, August 5, 2014 9:36:24 PM Subject: Re: [ovirt-users] Proper way to change and persist vdsm configuration options
On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei@redhat.com> wrote:
Hey,
Just noticed something that I forgot about.. before filing new BZ, see in ovirt-host-deploy README.environment [1] the section: VDSM/configOverride(bool) [True] Override vdsm configuration file.
changing it to false will keep your vdsm.conf file as is after deploying the host again (what happens after node upgrade)
[1] https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment
please check if that what you meant..
Thanks, Yaniv Bronhaim.
I was unaware of that package. I will check that out as that seems to be what I am looking for.
I have not filed this in BZ and will hold off pending ovirt-host-deploy. If you feel a BZ is still necessary then please do file one and I would be happy to provide input if it would help.
Right now this is my workflow.
1. Foreman provisions bare-metal server with CentOS 6.5 2. Once provisioned and system rebooted Puppet applies puppet-ovirt [1] module that adds the necessary yum repos and should stop here..
, and installs packages. Part of my Puppet deployment is basic things like sudo management (vdsm's sudo is account for), sssd configuration, and other aspects that are needed by every system in my infrastructure. Part of the ovirt::node Puppet class is managing vdsm.conf, and in my case that means ensuring iSER is enabled for iSCSI over IB. you can create a file /etc/ovirt-host-deploy.conf.d/40-xxx.conf
VDSM_CONFIG/section/key=str:content ---
this will create a proper vdsm.conf when host-deploy is initiated.
you should now use the rest api to initiate host-deploy.
3. Once host is online and has had the full Puppet catalog applied I log into ovirt-engine web interface and add those host (pulling it's data via the Foreman provider). right, but you should let this process install packages and manage configuration.
What I've noticed is that after step #3, after a host is added by ovirt-engine, the vdsm.conf file is reset to default and I have to reapply Puppet before it can be used as the one of my Data Storage Domains requires iSER (not available over TCP). right, see above.
What would be the workflow using ovirt-host-deploy? Thus far I've had to piece together my workflow based on the documentation and filling in blanks where possible since I do require customizations to vdsm.conf and the documented workflow of adding a host via web UI does not allow for such customization.
Thanks, - Trey
[1] - https://github.com/treydock/puppet-ovirt (README not fully updated as still working out how to use Puppet with oVirt)
On 08/05/2014 08:12 AM, Trey Dockendorf wrote:
I'll file BZ. As far as I can recall this has been an issue since 3.3.x as I have been using Puppet to modify values and have had to rerun Puppet after installing a node via GUI and when performing update from GUI. Given that it has occurred when VDSM version didn't change on the node it seems likely to be something being done by Python code that bootstraps a node and performs the other tasks. I won't have any systems available to test with for a few days. New hardware specifically for our oVirt deployment is on order so should be able to more thoroughly debug and capture logs at that time.
Would using vdsm-reg be a better solution for adding new nodes? I only tried using vdsm-reg once and it went very poorly...lots of missing dependencies not pulled in from yum install I had to install manually via yum. Then the node was auto added to newest cluster with no ability to change the cluster. Be happy to debug that too if there's some docs that outline the expected behavior.
Using vdsm-reg or something similar seems like a better fit for puppet deployed nodes, as opposed to requiring GUI steps to add the node.
Thanks - Trey On Aug 4, 2014 5:53 AM, "ybronhei" <ybronhei@redhat.com> wrote:
On 07/31/2014 01:28 AM, Trey Dockendorf wrote:
I'm running ovirt nodes that are stock CentOS 6.5 systems with VDSM installed. I am using iSER to do iSCSI over RDMA and to make that work I have to modify /etc/vdsm/vdsm.conf to include the following:
[irs] iscsi_default_ifaces = iser,default
I've noticed that any time I upgrade a node from the engine web interface that changes to vdsm.conf are wiped out. I don't know if this is being done by the configuration code or by the vdsm package. Is there a more reliable way to ensure changes to vdsm.conf are NOT removed automatically?
Hey,
vdsm.conf shouldn't wiped out and shouldn't changed at all during upgrade. other related conf files (such as libvirtd.conf) might be overrided to keep defaults configurations for vdsm. but vdsm.conf should persist with user's modification. from my check, regular yum upgrade doesn't touch vdsm.conf
Douglas can you verify that with node upgrade? might be specific to that flow..
Trey, can file a bugzilla on that and describe your steps there?
Thanks
Yaniv Bronhaim,
Thanks, - Trey _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Yaniv Bronhaim.
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
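For reference on the override mechanism Yuriy is testing above, a minimal sketch based on Alon's earlier replies in this thread, which state that the conf.d file is currently read on the host being deployed rather than on the engine; the file name is illustrative and the [environment:enforce] header is copied from Yuriy's own example:

---
# run on the host that will be (re)deployed, not on the ovirt-engine machine
cat > /etc/ovirt-host-deploy.conf.d/50-migration-bw.conf << 'EOF'
[environment:enforce]
VDSM_CONFIG/vars/migration_max_bandwidth=str:300
EOF
---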

On 08/05/2014 09:36 PM, Trey Dockendorf wrote:
On Tue, Aug 5, 2014 at 12:32 PM, ybronhei <ybronhei@redhat.com> wrote:
Hey,
Just noticed something that I forgot about.. before filing new BZ, see in ovirt-host-deploy README.environment [1] the section: VDSM/configOverride(bool) [True] Override vdsm configuration file.
changing it to false will keep your vdsm.conf file as is after deploying the host again (what happens after node upgrade)
[1] https://github.com/oVirt/ovirt-host-deploy/blob/master/README.environment
please check if that what you meant..
Thanks, Yaniv Bronhaim.
I was unaware of that package. I will check that out as that seems to be what I am looking for.
I have not filed a BZ and will hold off pending the ovirt-host-deploy check. If you feel a BZ is still necessary then please do file one, and I would be happy to provide input if it would help.
Right now this is my workflow:
1. Foreman provisions the bare-metal server with CentOS 6.5.
2. Once provisioned and the system has rebooted, Puppet applies the puppet-ovirt [1] module, which adds the necessary yum repos and installs packages. Part of my Puppet deployment is basic things like sudo management (vdsm's sudoers entry is accounted for), sssd configuration, and other aspects needed by every system in my infrastructure. Part of the ovirt::node Puppet class is managing vdsm.conf, and in my case that means ensuring iSER is enabled for iSCSI over IB (see the sketch after this list).
3. Once the host is online and has had the full Puppet catalog applied, I log into the ovirt-engine web interface and add the host (pulling its data via the Foreman provider).
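For step 2, a minimal sketch of how that one vdsm.conf setting could be managed from Puppet, assuming the puppetlabs/inifile module is available (the resource title and the use of ini_setting are illustrative assumptions; puppet-ovirt may handle this differently):

    # Keep iSER ahead of the default interface for iSCSI connections.
    # Assumes ini_setting from the puppetlabs/inifile module.
    ini_setting { 'vdsm iscsi_default_ifaces':
      ensure  => present,
      path    => '/etc/vdsm/vdsm.conf',
      section => 'irs',
      setting => 'iscsi_default_ifaces',
      value   => 'iser,default',
    }

Combined with VDSM/configOverride=bool:False on the deploy side, Puppet would only need to converge the file once rather than re-apply it after every host deployment.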
Just wondering (and I may have missed this in the thread) - if you want bare-metal provisioning and Foreman, why not just use the new 3.5 integration for that: http://www.ovirt.org/Features/AdvancedForemanIntegration It will call Foreman, do the bare-metal provisioning with the hostgroup you chose, and then Foreman will call "add host" in the engine on your behalf. (This doesn't limit you from either extending the ovirt-host-deploy plugins or using more Puppet modules via the hostgroup.)

On Tue, Aug 5, 2014 at 3:31 PM, Itamar Heim <iheim@redhat.com> wrote:
Just wondering (and I may have missed this in the thread) - if you want bare-metal provisioning and Foreman, why not just use the new 3.5 integration for that: http://www.ovirt.org/Features/AdvancedForemanIntegration
It will call Foreman, do the bare-metal provisioning with the hostgroup you chose, and then Foreman will call "add host" in the engine on your behalf.
(This doesn't limit you from either extending the ovirt-host-deploy plugins or using more Puppet modules via the hostgroup.)
I was actually looking forward to using those features but haven't yet had a chance to test 3.5 as I have no spare hardware at the moment. The Foreman Integration is definitely on my radar as we rely very heavily on Foreman. Thanks, - Trey
participants (8)
- Alon Bar-Lev
- Itamar Heim
- Joop
- Sven Kieske
- Trey Dockendorf
- ybronhei
- Yedidyah Bar David
- Yuriy Demchenko