[Users] BSTRAP component='CreateConf' status='FAIL' message='Basic configuration failed to import default values'

Karli Sjöberg Karli.Sjoberg at slu.se
Tue Jul 3 21:11:39 UTC 2012


________________________________________
From: Itamar Heim [iheim at redhat.com]
Sent: 29 June 2012 19:54
To: Karli Sjöberg
Cc: Doron Fediuck; users at oVirt.org
Subject: Re: [Users] BSTRAP component='CreateConf' status='FAIL' message='Basic configuration failed to import default values'

On 06/29/2012 02:42 PM, Karli Sjöberg wrote:
>
> On 29 Jun 2012, at 13:16, Itamar Heim wrote:
>
>> On 06/29/2012 07:11 AM, Karli Sjöberg wrote:
>>>
>>> On 29 Jun 2012, at 12:54, Itamar Heim wrote:
>>>
>>>> On 06/29/2012 03:27 AM, Karli Sjöberg wrote:
>>>>>
>>>>> On 28 Jun 2012, at 22:11, Doron Fediuck wrote:
>>>>>
>>>>>> Please rm /tmp/*py* in your host.
>>>>>> Older scripts may be interrupting.
>>>>>
>>>>> # rm -f /tmp/*py*
>>>>> # rm -f /tmp/*log*
>>>>> # rm -f /tmp/*tmp*
>>>>>
>>>>>
>>>>> "Re-Install"
>>>>>
>>>>>
>>>>> /var/log/ovirt-engine/engine.log:
>>>>>
>>>> ...
>>>>
>>>>>
>>>>>
>>>>> OK, one step forward and one step back. It seems as if the host
>>>>> installation went through, but after reboot, I have no contact with the
>>>>> web user interface. It thinks it's active:
>>>>> # systemctl is-active ovirt-engine.service
>>>>> active
>>>>>
>>>>> But I don't even get to the initial portal chooser page.
>>>>
>>>> did you try to add the host the engine is running on, or another physical
>>>> machine?
>>>
>>> The same. As I stated in the beginning, I want to be able to have the
>>> engine configured to execute power management for the rest of the hosts
>>> in the cluster, as per:
>>>
>>> Source: http://lists.ovirt.org/pipermail/users/2012-February/000361.html
>>> "Yes, the ovirt backend does not shut down or power up any hosts
>>> directly, it can work only through vdsm. Therefore you need one running
>>> host per datacenter to be able to manage the rest of the hosts."
>>>
>>> Adding the engine as a host in the cluster seems to be the way to
>>> achieve that.
>>
>> 1. power management is DC wide, not cluster.
>> 2. you don't want the engine to be a host, unless you use the all-in-one
>> plugin, which makes sure the node isn't rebooted while it is being added.
>> 3. it still won't help your problem, since it is local storage.
>> 4. *if* you use the engine as a node (which may cause various issues),
>> you need to make sure no VMs will run on it and it won't be an SPM
>> (setting spm priority of 0 or -1 from the api)
>> 5. you should probably also add the host from the api, using the
>> no-reboot flag.
>
> Let me explain a little to make you better understand my reasoning
> behind this configuration.
>
> We work with segmented, separated networks. One network for public
> access, one for storage traffic, one for management and so on. That
> means that if the nodes themselves have to do their own
> power-management, the nodes would require three interfaces each, and the
> metal we are using for hosts just doesn't have that. But if we can use the
> backend to do that, the hosts would only require two interfaces, which
> most 1U servers are equipped with as standard (plus one
> iLO/IPMI/whatever), so we can use them as hosts without issue. Then the
> backend has one extra interface that it can use to communicate with the
> respective service processors over the power management network.
>
> Is there a "better" way to achieve what we are aiming for?

so there is no routing between the ovirtmgmt network and the iLO/IPMI
network?
what I am missing is why it is OK for the engine to connect to all the
hosts' iLOs, but the other hosts can't?


I can't think of any reason why it wouldn't be, but it sounds like you do. What's on your mind?
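
To make the layout concrete (the interface name and address below are
just made-up examples): the engine's extra interface sits on the power
management network and can reach the service processors, while the hosts
cannot. On the engine, over that extra interface (say eth2):

# ping -c 1 10.10.30.21

where 10.10.30.21 stands in for one of the hosts' iLO addresses; this
answers. The same ping from any of the hosts just gives "Network is
unreachable", since they have no leg on that network.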


>
> 2. Then I would like to use that. Where can I find an all-in-one plugin
> that fits into the install?
> 3. I may have misunderstood, but I will be using NFS shared storage.
> 4. Precisely what I was thinking. Thanks for the SPM hint!
> 5. OK, sure. Is it still necessary to have the all-in-one plugin
> installed in that case?

No - I just mentioned it as a use case in which we do such an install for
simple POCs where the engine and vdsm are on the same host, but it is
limited to using the local storage of that host. It gives an easy
out-of-the-box ramp-up: it adds the host, creates the DC, the storage
domain, etc. - all as part of the installation.
(And it is an rpm just like the rest of the oVirt rpms, in the same repo.)
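
For the archives, I assume the plugin you mean is the all-in-one setup
plugin rpm; my guess at the exact package name and install steps
(untested here, so correct me if I am wrong) would be:

# yum install ovirt-engine-setup-plugin-allinone
# engine-setup

i.e. pull in the plugin first and then run the normal setup, which would
then also add the local host, create the DC and storage domain as you
describe.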


I have made some headway and managed to add the host into its own cluster using ovirt-engine-cli, by executing:
# create host --address "xxx.xxx.xxx.xxx" --cluster-id "99408929-82cf-4dc7-a532-9d998063fa95" --name "Default" --root_password "Password" --reboot_after_installation "0"

But I haven't had much luck setting the SPM priority to -1. Would you mind showing me the exact syntax, please?
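
For reference, the kind of thing I have been trying is a PUT against the
REST API, along these lines (the port and the XML element name here are
pure guesses on my part, which is probably why it has not worked):

# curl -k -u "admin@internal:Password" -X PUT -H "Content-Type: application/xml" \
  -d '<host><storage_manager priority="-1"/></host>' \
  "https://xxx.xxx.xxx.xxx:8443/api/hosts/<host-id>"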


>
> I would like to say that I'm loving oVirt so far, it is truly an
> amazing piece of software. Thank you all, and thank you for your help
> and your knowledge!

thanks - glad to hear.


