[ovirt-users] Failed deploy of ovirt-engine using ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso

Roberto Nunin robnunin at gmail.com
Wed Dec 6 16:00:36 UTC 2017


2017-12-06 15:29 GMT+01:00 Simone Tiraboschi <stirabos at redhat.com>:

>
>
> On Wed, Dec 6, 2017 at 2:33 PM, Roberto Nunin <robnunin at gmail.com> wrote:
>
>> It has worked.
>>
>
> Thanks!
>
>
>>
>> Now an additional (hopefully final) question.
>>
>> In the blog https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>> it is written that, after selecting the cluster, the additional hosts
>> should be visible, to be imported or detached.
>>
>> I cannot see the additional hosts. Must I add them as new hosts?
>>
>
> Yes, please follow this section:
> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/#configuring-hosts-two-and-three-for-hosted-engine
>

Ciao Simone,

I've followed the procedure (from the same blog I mentioned in my email),
but if I add hosts 2 and 3 instead of importing them, there is nothing to
"see" when I go to add the data domain:

[image: inline image 1]

Nothing appears in the drop-down list, while glusterd is running on all
three nodes:


aps-te68-mng.example.com: Status of volume: data
aps-te68-mng.example.com: Gluster process                                           TCP Port  RDMA Port  Online  Pid
aps-te68-mng.example.com: ------------------------------------------------------------------------------
aps-te68-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data  49153     0          Y       64531
aps-te68-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data  49153     0          Y       65012
aps-te68-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data  49155     0          Y       75603
aps-te68-mng.example.com: Self-heal Daemon on localhost                             N/A       N/A        Y       75690
aps-te68-mng.example.com: Self-heal Daemon on aps-te64-mng.example.com              N/A       N/A        Y       75675
aps-te68-mng.example.com: Self-heal Daemon on aps-te61-mng.example.com              N/A       N/A        Y       12170
aps-te68-mng.example.com:
aps-te68-mng.example.com: Task Status of Volume data
aps-te68-mng.example.com: ------------------------------------------------------------------------------
aps-te68-mng.example.com: There are no active volume tasks
aps-te68-mng.example.com:
aps-te61-mng.example.com: Status of volume: data
aps-te61-mng.example.com: Gluster process                                           TCP Port  RDMA Port  Online  Pid
aps-te61-mng.example.com: ------------------------------------------------------------------------------
aps-te61-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data  49153     0          Y       64531
aps-te61-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data  49153     0          Y       65012
aps-te61-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data  49155     0          Y       75603
aps-te61-mng.example.com: Self-heal Daemon on localhost                             N/A       N/A        Y       12170
aps-te61-mng.example.com: Self-heal Daemon on aps-te64-mng.example.com              N/A       N/A        Y       75675
aps-te61-mng.example.com: Self-heal Daemon on aps-te68-mng.example.com              N/A       N/A        Y       75690
aps-te61-mng.example.com:
aps-te61-mng.example.com: Task Status of Volume data
aps-te61-mng.example.com: ------------------------------------------------------------------------------
aps-te61-mng.example.com: There are no active volume tasks
aps-te61-mng.example.com:
aps-te64-mng.example.com: Status of volume: data
aps-te64-mng.example.com: Gluster process                                           TCP Port  RDMA Port  Online  Pid
aps-te64-mng.example.com: ------------------------------------------------------------------------------
aps-te64-mng.example.com: Brick aps-te61-mng.example.com:/gluster_bricks/data/data  49153     0          Y       64531
aps-te64-mng.example.com: Brick aps-te64-mng.example.com:/gluster_bricks/data/data  49153     0          Y       65012
aps-te64-mng.example.com: Brick aps-te68-mng.example.com:/gluster_bricks/data/data  49155     0          Y       75603
aps-te64-mng.example.com: Self-heal Daemon on localhost                             N/A       N/A        Y       75675
aps-te64-mng.example.com: Self-heal Daemon on aps-te68-mng.example.com              N/A       N/A        Y       75690
aps-te64-mng.example.com: Self-heal Daemon on aps-te61-mng.example.com              N/A       N/A        Y       12170
aps-te64-mng.example.com:
aps-te64-mng.example.com: Task Status of Volume data
aps-te64-mng.example.com: ------------------------------------------------------------------------------
aps-te64-mng.example.com: There are no active volume tasks
aps-te64-mng.example.com:
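
For reference, per-host output in this format can be gathered with a loop
along these lines (a sketch, assuming passwordless ssh between the nodes;
the sed prefix just labels each line with its host):

for h in aps-te68-mng.example.com aps-te61-mng.example.com aps-te64-mng.example.com; do
    # run the status query on each node and tag every output line
    ssh "$h" gluster volume status data | sed "s/^/$h: /"
done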


Unfortunately, I found no way to import hosts 2 and 3. I searched all
around the GUI, without success.
Do you have any hints to suggest?


Thanks

>
>
>
>
>> Or does this mean that the deploy hasn't finished correctly?
>>
>> Thanks in advance.
>>
>>
>>
>> 2017-12-06 13:47 GMT+01:00 Yedidyah Bar David <didi at redhat.com>:
>>
>>> On Wed, Dec 6, 2017 at 2:29 PM, Simone Tiraboschi <stirabos at redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Dec 6, 2017 at 1:23 PM, Roberto Nunin <robnunin at gmail.com>
>>>> wrote:
>>>>
>>>>> I've read the Bugzilla report, but after modifying the answer file, how
>>>>> do I provide it to the Cockpit process?
>>>>> I know that in the CLI installation it can be provided as a parameter of
>>>>> the CLI command, but how in Cockpit?
>>>>>
>>>>
>>>> You are right, unfortunately we have to fix it: I don't see any
>>>> possible workaround at the Cockpit level.
>>>>
>>>
>>> Now pushed a simple fix:
>>>
>>> https://gerrit.ovirt.org/85142
>>>
>>> You can try using the jenkins-generated RPMs:
>>>
>>> http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_master_check-patch-el7-x86_64/1067/artifact/exported-artifacts/
>>>
>>>
>>>>
>>>> Directly running hosted-engine-setup from CLI instead should work.
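>>>>
>>>> For reference, a minimal sketch of that CLI invocation, assuming the
>>>> edited answer file was saved as /root/answers.conf (otopi-based setup
>>>> tools accept a pre-filled answer file via --config-append):
>>>>
>>>>   hosted-engine --deploy --config-append=/root/answers.conf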
>>>>
>>>>
>>>>>
>>>>> 2017-12-06 12:25 GMT+01:00 Yedidyah Bar David <didi at redhat.com>:
>>>>>
>>>>>> On Wed, Dec 6, 2017 at 1:10 PM, Roberto Nunin <robnunin at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> This is the first request; note that it does not allow adding a CIDR:
>>>>>>> [image: inline image 1]
>>>>>>>
>>>>>>> 2017-12-06 11:54 GMT+01:00 Roberto Nunin <robnunin at gmail.com>:
>>>>>>>
>>>>>>>> Yes, both times on Cockpit.
>>>>>>>>
>>>>>>>> 2017-12-06 11:43 GMT+01:00 Simone Tiraboschi <stirabos at redhat.com>:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Dec 6, 2017 at 11:38 AM, Roberto Nunin <robnunin at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Ciao Simone,
>>>>>>>>>> thanks for the really quick answer.
>>>>>>>>>>
>>>>>>>>>> 2017-12-06 11:05 GMT+01:00 Simone Tiraboschi <stirabos at redhat.com>:
>>>>>>>>>>
>>>>>>>>>>> Ciao Roberto,
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Dec 6, 2017 at 10:02 AM, Roberto Nunin <robnunin at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I'm having trouble deploying a three-host hyperconverged lab
>>>>>>>>>>>> using the ISO image named above.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Please note that ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso
>>>>>>>>>>> is still pre-release software.
>>>>>>>>>>> Your contribution in testing it is really appreciated!
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It's a pleasure!
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> My test environment is based on HPE BL680cG7 blade servers.
>>>>>>>>>>>> These servers have six physical 10Gb network interfaces (FlexNICs),
>>>>>>>>>>>> each with four profiles (Ethernet, FCoE, iSCSI, etc.).
>>>>>>>>>>>>
>>>>>>>>>>>> I chose one of these six physical interfaces (enp5s0f0) and
>>>>>>>>>>>> assigned it a static IPv4 address on each node.
>>>>>>>>>>>>
>>>>>>>>>>>> After the node reboot, the interface's ONBOOT param was still set
>>>>>>>>>>>> to no. I changed it to yes via the iLO interface and restarted the
>>>>>>>>>>>> network. Fine.
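>>>>>>>>>>>>
>>>>>>>>>>>> For reference, a sketch of the change, assuming the default script
>>>>>>>>>>>> location:
>>>>>>>>>>>>
>>>>>>>>>>>>   # /etc/sysconfig/network-scripts/ifcfg-enp5s0f0
>>>>>>>>>>>>   ONBOOT=yes
>>>>>>>>>>>>
>>>>>>>>>>>> followed by "systemctl restart network".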
>>>>>>>>>>>>
>>>>>>>>>>>> After the Gluster setup with the gdeploy script under the Cockpit
>>>>>>>>>>>> interface, working around the errors coming from
>>>>>>>>>>>> /usr/share/gdeploy/scripts/blacklist_all_disks.sh, I started the
>>>>>>>>>>>> hosted-engine deploy.
>>>>>>>>>>>>
>>>>>>>>>>>> With the new version, I'm getting an error I've never seen before:
>>>>>>>>>>>>
>>>>>>>>>>>> The Engine VM (10.114.60.117) and this host (10.114.60.134/24)
>>>>>>>>>>>> will not be in the same IP subnet. Static routing configuration are not
>>>>>>>>>>>> supported on automatic VM configuration.
>>>>>>>>>>>>  Failed to execute stage 'Environment customization': The
>>>>>>>>>>>> Engine VM (10.114.60.117) and this host (10.114.60.134/24)
>>>>>>>>>>>> will not be in the same IP subnet. Static routing configuration are not
>>>>>>>>>>>> supported on automatic VM configuration.
>>>>>>>>>>>>  Hosted Engine deployment failed.
>>>>>>>>>>>>
>>>>>>>>>>>> There's no input field for the HE subnet mask. Anyway, in our
>>>>>>>>>>>> class-C oVirt management network, these ARE in the same subnet.
>>>>>>>>>>>> How can I recover from this? I cannot add the /24 CIDR in the HE
>>>>>>>>>>>> static IP address field; it isn't allowed.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>> Incidentally, we also just got another similar report:
>>>>>>
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1522712
>>>>>>
>>>>>> Perhaps a new regression? Although I can't see where it happened.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>>>>> 10.114.60.117 and 10.114.60.134/24 are in the same IPv4 /24
>>>>>>>>>>> subnet, so it shouldn't fail.
>>>>>>>>>>> The issue here seems different:
>>>>>>>>>>>
>>>>>>>>>>> From the hosted-engine-setup log, I see that you passed the VM IP
>>>>>>>>>>> address via the answer file:
>>>>>>>>>>> 2017-12-06 09:14:30,195+0100 DEBUG otopi.context
>>>>>>>>>>> context.dumpEnvironment:831 ENV
>>>>>>>>>>> OVEHOSTED_VM/cloudinitVMStaticCIDR=str:'10.114.60.117'
>>>>>>>>>>>
>>>>>>>>>>> while the right syntax should be:
>>>>>>>>>>> OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24
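>>>>>>>>>>>
>>>>>>>>>>> i.e. the relevant fragment of the answer file should look like (a
>>>>>>>>>>> sketch, assuming the standard otopi answer-file layout):
>>>>>>>>>>>
>>>>>>>>>>>   [environment:default]
>>>>>>>>>>>   OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24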
>>>>>>>>>>>
>>>>>>>>>>> Did you write the answer file yourself, or did you enter the IP
>>>>>>>>>>> address in the cockpit wizard? If so, we probably have a regression there.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I entered it while providing data for the setup, using the Cockpit
>>>>>>>>>> interface. I tried to add the CIDR (/24), but it isn't allowed by the
>>>>>>>>>> Cockpit web interface. No manual update of the answer file.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Moreover, the VM FQDN is asked for twice during the deploy
>>>>>>>>>>>> process. Is that correct?
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> No, I don't think so, but I don't see it in your logs.
>>>>>>>>>>> Could you please explain?
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Yes: the first time it is requested is during the initial setup of
>>>>>>>>>> the HE VM deploy.
>>>>>>>>>>
>>>>>>>>>> The second time, instead, it is asked (at least of me) in this step,
>>>>>>>>>> after the initial setup:
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> So both on the Cockpit side?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> [image: inline image 1]
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Some additional, general questions:
>>>>>>>>>>>> Must NetworkManager be disabled when deploying the HCI solution?
>>>>>>>>>>>> In my attempt, it wasn't disabled.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>> Simone, could you confirm whether or not NM must stay in place
>>>>>>>>>> while deploying? This question has been around since 3.6... what is
>>>>>>>>>> the "best practice"?
>>>>>>>>>> All of my RHV environments (3.6, 4.0.1, 4.1.6) have it disabled, but
>>>>>>>>>> I wasn't able to find any mandatory rule.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> In early 3.6 you had to disable it, but now you can safely keep it
>>>>>>>>> on.
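>>>>>>>>>
>>>>>>>>> If you want to double-check what is actually managing the links, a
>>>>>>>>> quick sketch:
>>>>>>>>>
>>>>>>>>>   systemctl is-active NetworkManager network
>>>>>>>>>   nmcli device status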
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>>> Is there some document to follow to perform a correct deploy?
>>>>>>>>>>>> Is this one still "valid"?:
>>>>>>>>>>>> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>>>>>>>>>>>>
>>>>>>>>>>>> Attached hosted-engine-setup log.
>>>>>>>>>>>> TIA
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Roberto
>>>>>>>>>>>> 110-006-970
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Roberto Nunin
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Roberto Nunin
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Roberto Nunin
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Didi
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Roberto Nunin
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Didi
>>>
>>
>>
>>
>> --
>> Roberto Nunin
>>
>>
>>
>>
>


-- 
Roberto Nunin