[ovirt-users] Building ovirt-host-deploy gives `configure: error: otopi-devtools required but missing`
Yedidyah Bar David
didi at redhat.com
Mon May 15 10:20:44 UTC 2017
On Wed, May 10, 2017 at 5:37 PM, Leni Kadali Mutungi
<lenikmutungi at gmail.com> wrote:
> Sorry for the silence. I've been doing some tinkering. I did what
> Jason did i.e. `./configure --enable-java-sdk --with-maven
> --with-otopi-sources=/home/user/otopi` since that's where I had
> checked out otopi. ./configure is able to locate where the
> otopi-bundle file is (/usr/share/otopi directory for me) with the
> options above. If you try to use the options `--otopi-bundle` or
> `--with-otopi-bundle`, you'll get an unrecognized option error from
> ./configure.
>
> I then ran `sudo ovirt-host-deploy/src/bin/ovirt-host-deploy` at my
> command prompt and got the same error as that which Jason got.
>>>>> # ./src/bin/ovirt-host-deploy
>>>>> [ INFO ] Stage: Initializing
>>>>> Continuing will configure this host for serving as
>>>>> hypervisor. Are you sure you want to continue? (yes/no) yes
>>>>> [ INFO ] Stage: Environment setup
>>>>> Configuration files: []
>>>>> Log file: /tmp/ovirt-host-deploy-20170425170102-6mdsx6.log
>>>>> Version: otopi-1.7.0_master ()
>>>>> Version: ovirt-host-deploy-1.7.0_master ()
>>>>> [ INFO ] Stage: Environment packages setup
>>>>> [ ERROR ] Failed to execute stage 'Environment packages setup':
>>>>> Packager install not implemented
>>>>> [ INFO ] Stage: Pre-termination
>>>>> [ INFO ] Stage: Termination
>>>>>
> The difference being that I wasn't doing this as root; I was using
> sudo for elevated privileges. Looking at the log file, I saw the
> following errors:
> 2017-05-07 07:21:37,011+0300 DEBUG
> otopi.plugins.otopi.packagers.yumpackager yumpackager._boot:184 Cannot
> initialize miniyum
> Traceback (most recent call last):
> File "/home/user/otopi/src/bin/../plugins/otopi/packagers/yumpackager.py",
> line 176, in _boot
> self._refreshMiniyum()
> File "/home/user/otopi/src/bin/../plugins/otopi/packagers/yumpackager.py",
> line 134, in _refreshMiniyum
> constants.PackEnv.YUM_ENABLED_PLUGINS
> File "/home/user/otopi/src/bin/../plugins/otopi/packagers/yumpackager.py",
> line 61, in _getMiniYum
> from otopi import miniyum
> File "/home/user/otopi/src/otopi/miniyum.py", line 17, in <module>
> import rpmUtils.miscutils
>
> Traceback (most recent call last):
> File "/home/user/otopi/src/otopi/context.py", line 132, in _executeMethod
> method['method']()
> File "/home/user/ovirt-host-deploy/src/bin/../plugins/ovirt-host-deploy/vdsm/vdsmid.py",
> line 84, in _packages
> self.packager.install(('dmidecode',))
> File "/home/user/otopi/src/otopi/packager.py", line 98, in install
> raise NotImplementedError(_('Packager install not implemented'))
> NotImplementedError: Packager install not implemented
>
> So as a workaround, I used the instructions in the README to create a
> file called /etc/ovirt-host-deploy.conf.d/50-offline-packager.conf to
> suppress this behavior. It contains the following:
>
> [environment:init]
> ODEPLOY/offlinePackager=bool:True
> PACKAGER/yumpackagerEnabled=bool:False
>
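A side note: these override lines use otopi's `KEY=type:value` notation.
A rough sketch of how such a line can be decoded in Python (illustrative
only, not otopi's actual parser):

```python
def parse_override(line):
    """Decode an otopi-style 'KEY=type:value' override line.

    Illustrative sketch only -- not otopi's real parser, and only a
    few of the supported types are handled here.
    """
    key, _, typed = line.partition('=')
    vtype, _, raw = typed.partition(':')
    casts = {
        'bool': lambda s: s.lower() in ('true', 'yes', '1'),
        'int': int,
        'str': str,
    }
    return key.strip(), casts[vtype](raw)
```

So `parse_override('ODEPLOY/offlinePackager=bool:True')` yields the key
`ODEPLOY/offlinePackager` with the Python value True.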
> This allowed it to run further than it did for Jason (and myself when
> I tried it out). The new error message I got was:
> user at localhost:~$ sudo ovirt-host-deploy/src/bin/ovirt-host-deploy
> [sudo] password for user:
> [ INFO ] Stage: Initializing
> Continuing will configure this host for serving as
> hypervisor. Are you sure you want to continue? (yes/no) y
> [ INFO ] Stage: Environment setup
> Configuration files:
> ['/etc/ovirt-host-deploy.conf.d/50-fakevmstats.conf',
> '/etc/ovirt-host-deploy.conf.d/50-faqemu.conf',
> '/etc/ovirt-host-deploy.conf.d/50-offline-packager.conf']
> Log file: /tmp/ovirt-host-deploy-20170509143602-zo5v24.log
> Version: otopi-1.7.0_master ()
> Version: ovirt-host-deploy-1.7.0_master ()
> [ INFO ] Stage: Environment packages setup
> [ INFO ] Stage: Programs detection
> [ INFO ] Stage: Environment customization
> [ INFO ] Kdump unsupported
> [ INFO ] Stage: Setup validation
> [WARNING] Cannot validate host name settings, reason: resolved host
> does not match any of the local addresses
> [WARNING] Grubby not present - not setting kernel arguments.
> [ ERROR ] Failed to execute stage 'Setup validation': 'VIRT/enable'
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
>
> Looking at the log file showed the following errors:
>
> 2017-05-09 14:36:02,309+0300 DEBUG otopi.context
> context._executeMethod:128 Stage boot METHOD
> otopi.plugins.otopi.packagers.yumpackager.Plugin._boot
> 2017-05-09 14:36:02,316+0300 DEBUG
> otopi.plugins.otopi.packagers.yumpackager yumpackager._boot:184 Cannot
> initialize miniyum
> Traceback (most recent call last):
> File "/home/herabus/otopi/src/bin/../plugins/otopi/packagers/yumpackager.py",
> line 176, in _boot
> self._refreshMiniyum()
> File "/home/herabus/otopi/src/bin/../plugins/otopi/packagers/yumpackager.py",
> line 134, in _refreshMiniyum
> constants.PackEnv.YUM_ENABLED_PLUGINS
> File "/home/herabus/otopi/src/bin/../plugins/otopi/packagers/yumpackager.py",
> line 61, in _getMiniYum
> from otopi import miniyum
> File "/home/herabus/otopi/src/otopi/miniyum.py", line 17, in <module>
> import rpmUtils.miscutils
> ImportError: No module named rpmUtils.miscutils
> 2017-05-09 14:36:04,550+0300 DEBUG
> otopi.plugins.otopi.network.hostname plugin.execute:921
> execute-output: ('/bin/ip', 'addr', 'show') stdout:
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> group default qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
> valid_lft forever preferred_lft forever
> 2: enp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state
> DOWN group default qlen 1000
> link/ether 4c:72:b9:6c:e5:f9 brd ff:ff:ff:ff:ff:ff
> 3: wlp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state
> DOWN group default qlen 1000
> link/ether ba:4a:1b:2f:98:ca brd ff:ff:ff:ff:ff:ff
> 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
> state DOWN group default
> link/ether 02:42:61:40:ef:1d brd ff:ff:ff:ff:ff:ff
> inet 172.17.0.1/16 scope global docker0
> valid_lft forever preferred_lft forever
>
> 2017-05-09 14:36:04,550+0300 DEBUG
> otopi.plugins.otopi.network.hostname plugin.execute:926
> execute-output: ('/bin/ip', 'addr', 'show') stderr:
>
> 2017-05-09 14:36:04,551+0300 DEBUG
> otopi.plugins.otopi.network.hostname hostname._validation:100 my
> addresses: ['127.0.1.1', '127.0.1.1', '127.0.1.1']
> 2017-05-09 14:36:04,551+0300 DEBUG
> otopi.plugins.otopi.network.hostname hostname._validation:101 local
> addresses: [u'172.17.0.1']
> 2017-05-09 14:36:04,552+0300 WARNING
> otopi.plugins.otopi.network.hostname hostname._validation:106 Cannot
> validate host name settings, reason: resolved host does not match any
> of the local addresses
You can ignore this warning for now and get to it later on.
> Traceback (most recent call last):
> File "/home/herabus/otopi/src/otopi/context.py", line 132, in _executeMethod
> method['method']()
> File "/home/herabus/ovirt-host-deploy/src/bin/../plugins/ovirt-host-deploy/tune/tuned.py",
> line 75, in _validation
> if self.environment[odeploycons.VirtEnv.ENABLE]:
> KeyError: 'VIRT/enable'
Now is the time to explain something.
ovirt-host-deploy is not designed to be run manually the way you are trying.
You will get the exact same error if you try to do this on CentOS or Fedora.
The normal way it works is that the engine "bundles" it in a tarball,
copies it to the target host using ssh, untars it there and runs it.
It then talks with it - the engine sends queries and commands, host-deploy replies, etc.
The protocol they use is described in otopi, in the file README.dialog.
otopi has (currently) two "dialects" - "human" (default) and "machine".
The engine and ovirt-host-deploy talk using the machine dialog.
To make ovirt-host-deploy talk with you using the machine dialog,
you should run it with:
ovirt-host-deploy DIALOG/dialect=str:machine
To make it let you configure it, run it with:
ovirt-host-deploy DIALOG/dialect=str:machine DIALOG/customization=bool:True
To know what it expects at each stage, I suggest having a look at an
ovirt-host-deploy log generated on el7 or Fedora.
Anyway, congrats on the nice progress!
>
> I tried starting the libvirtd service to see if that would make the
> VIRT/enable error go away or at least satisfy the requirements of
> ovirt-host-deploy, but it didn't seem to work.
If you check such a log file, you'll see there (among other things):
DIALOG:SEND **%QStart: CUSTOMIZATION_COMMAND
DIALOG:SEND ###
DIALOG:SEND ### Customization phase, use 'install' to proceed
DIALOG:SEND ### COMMAND>
DIALOG:SEND **%QHidden: FALSE
DIALOG:SEND ***Q:STRING CUSTOMIZATION_COMMAND
DIALOG:SEND **%QEnd: CUSTOMIZATION_COMMAND
DIALOG:RECEIVE env-query -k VIRT/enable
DIALOG:SEND **%QStart: VIRT/enable
DIALOG:SEND ###
DIALOG:SEND ### Please specify value for 'VIRT/enable':
DIALOG:SEND ### Response is VALUE VIRT/enable=type:value or ABORT VIRT/enable
DIALOG:SEND ***Q:VALUE VIRT/enable
DIALOG:SEND **%QEnd: VIRT/enable
DIALOG:RECEIVE VALUE VIRT/enable=bool:true
"SEND" is what the host-deploy sends, "RECEIVE" is what the engine
replies.
So host-deploy sent a prompt asking for a customization command,
the engine sent the command 'env-query -k VIRT/enable', host-deploy
then asked the engine to provide a value for 'VIRT/enable', and the
engine replied 'VIRT/enable=bool:true'.
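In Python terms, the engine side of that exchange boils down to
something like the function below. This is a sketch based only on the
log lines above, not the engine's real implementation - the full
protocol is in otopi's README.dialog:

```python
def reply_for(line, answers):
    """Return the engine's reply to one host-deploy dialog line,
    or None if the line needs no reply.

    'answers' maps environment keys to typed values, e.g.
    {'VIRT/enable': 'bool:true'}.  Sketch only, covering just the
    VALUE query shown in the log excerpt above.
    """
    if line.startswith('***Q:VALUE '):
        key = line.split(' ', 1)[1]
        if key in answers:
            return 'VALUE %s=%s' % (key, answers[key])
        # per the prompt above, the alternative response is ABORT
        return 'ABORT %s' % key
    return None
```

A real driver would feed host-deploy's stdout through this line by line
and write each non-None reply back to its stdin.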
> The other errors seem
> to be related to not having an IP address that ovirt-host-deploy can
> recognize. To package this for Debian, I would need to find the
> equivalent of yumpackager.py for aptitude/apt-get/apt, since it seems
> to be a dependency required by ovirt-host-deploy.
As I said, you can ignore it for now. But IMO this isn't specific
to Debian - search a bit and you'll find other similar cases.
>
> TL;DR: How to enable the virt service and assign an IP address that
> ovirt-host-deploy can use.
> Write/find a Python script equivalent to yumpackager.py and
> miniyum.py so that dependency of ovirt-host-deploy is satisfied
> as well.
The last one will indeed be very interesting, but it isn't mandatory
for you to continue, if your stated goal is to have a Debian host
managed by an oVirt engine. You can manually install all your stuff on
the host, and use offlinepackager so that host-deploy will not try to
install stuff for you. You'll then have a harder first-time install,
and will miss checking for updates etc. For these you'll indeed need to
write something like debpackager, and probably minideb as well -
engine-setup uses miniyum directly, as otopi's packager isn't enough
for it.
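For what it's worth, the core of such a debpackager would be a
translation of the packager operations onto apt. A very rough sketch -
the class name here is hypothetical, and this ignores the real otopi
plugin machinery around it:

```python
import subprocess


def apt_command(action, packages):
    """Build the apt-get command line for 'install' or 'remove'."""
    return ('apt-get', '-y', action) + tuple(packages)


class DebPackager:
    """Hypothetical apt-based packager backend (sketch only).

    The real interface in otopi's packager.py defines more operations
    than the two sketched here.
    """

    def install(self, packages):
        # The noninteractive frontend avoids debconf prompts mid-deploy.
        subprocess.check_call(
            apt_command('install', packages),
            env={'DEBIAN_FRONTEND': 'noninteractive',
                 'PATH': '/usr/sbin:/usr/bin:/sbin:/bin'},
        )

    def remove(self, packages):
        subprocess.check_call(apt_command('remove', packages))
```

With something like this, the `self.packager.install(('dmidecode',))`
call that raised NotImplementedError above would instead run
`apt-get -y install dmidecode`.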
I'd like to mention another point. As explained above, when the engine
adds a host, it copies to it a bundle (tarfile). This bundle is not
rpm/yum/dnf specific - it should work also on Debian. Normally,
ovirt-host-deploy (the rpm package) is installed only on the engine
machine. So if you do not care about the engine side for now, you
should be able to try adding a Debian host to your engine already -
just configure the offline packager. This might be easier to debug
than manually running ovirt-host-deploy.
Best,
--
Didi