[ovirt-users] Self-Hosted engine 4.1.7 unavailable from the outside of the host

Matteo Capuano kapu.net at gmail.com
Mon Dec 18 11:29:57 UTC 2017


On Mon, Dec 18, 2017 at 11:19 AM, Simone Tiraboschi <stirabos at redhat.com>
wrote:

>
>
> On Mon, Dec 18, 2017 at 11:09 AM, Matteo Capuano <kapu.net at gmail.com>
> wrote:
>
>>
>>
>> On Mon, Dec 18, 2017 at 8:55 AM, Simone Tiraboschi <stirabos at redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Sat, Dec 16, 2017 at 1:29 PM, Matteo Capuano <kapu.net at gmail.com>
>>> wrote:
>>>
>>>> Hi everyone, my name’s Matteo and I’m a new oVirt user.
>>>>
>>>> I’m trying to install the Gluster hyperconverged solution in a lab
>>>> environment, following the how-to written by Jason Brooks:
>>>>
>>>> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>>>>
>>>> Sadly, I can’t figure out how to configure the network on the
>>>> self-hosted engine, and I’m unable to make it reachable from outside
>>>> the host.
>>>>
>>>> As described in the how-to, I created three hosts (oVirt Node 4.1.7),
>>>> each with two NICs: one for Gluster and one for management. The
>>>> network is a static LAN with FQDNs resolvable (forward and reverse) by
>>>> a local DNS server. Here are the details:
>>>>
>>>> Gateway: 172.16.1.1
>>>>
>>>> DNS: 172.16.1.12
>>>>
>>>> Host1: 172.16.1.210 (management) – 172.16.2.210 (gluster)
>>>>
>>>> Host2: 172.16.1.220 (management) – 172.16.2.220 (gluster)
>>>>
>>>> Host3: 172.16.1.230 (management) – 172.16.2.230 (gluster)
>>>>
>>>> Engine: 172.16.1.200
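>>>>
>>>> For what it’s worth, I verified resolution with dig against the local
>>>> DNS; the FQDN below is just a placeholder for my lab names:
>>>>
>>>> # forward lookup of host1's management name (hostname is a placeholder)
>>>> dig +short @172.16.1.12 host1.lab.local
>>>> # reverse lookup of host1's management address
>>>> dig +short @172.16.1.12 -x 172.16.1.210
>>>> # same reverse check for the engine's address
>>>> dig +short @172.16.1.12 -x 172.16.1.200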
>>>>
>>>> When installing the engine I chose to bridge the management NIC
>>>> (172.16.1.210) of host1, but once the installation completed I was
>>>> unable to reach the engine from the LAN the hosts are connected to.
>>>> The engine (172.16.1.200) can ping only host1 (172.16.1.210), and only
>>>> host1 can ping the engine.
>>>>
>>>> As far as my network knowledge goes, to make the engine reachable
>>>> from outside host1 I would need either a third NIC or some device to
>>>> associate the IP and MAC address of the engine’s NIC.
>>>>
>>> Ciao Matteo,
>>> no, hosted-engine-setup should create a bridge for you.
>>> No need to do custom configuration to expose your VMs.
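>>> You can quickly confirm the bridge on host1; ovirtmgmt is the default
>>> name the setup uses (adjust if you renamed it):
>>>
>>> # the management bridge should hold host1's address after deployment
>>> ip addr show ovirtmgmt
>>> # list the ports enslaved to the bridge (physical NIC + engine vNIC)
>>> bridge link show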
>>>
>>> Are you trying on bare metal or on VMs with the engine VM as a nested VM?
>>>
>>
>> Ciao Simone,
>>
>> thank you for your answer.
>>
>> My lab is a nested environment. I installed oVirt on bare metal, with
>> the engine on another machine. On this setup I have six VMs:
>>
>> - 172.16.1.1  pfSense as firewall/gateway
>> - 172.16.1.10  a Windows 2016 machine used as a network guest
>> - 172.16.1.12  a NethServer installation as the DNS server (also on
>> 172.16.2.12)
>> - the three oVirt nodes, as described in my e-mail
>>
>> All the machines are pingable; only the engine inside host1 is
>> unreachable, and it can ping only host1.
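>>
>> Since everything is nested, I also double-checked that nested KVM is
>> enabled on the bare-metal host:
>>
>> # should print Y on Intel hardware (use kvm_amd/parameters/nested on AMD)
>> cat /sys/module/kvm_intel/parameters/nested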
>>
>
> OK, your issue is caused by the vdsm-no-mac-spoofing filter on your L1
> VMs. Please create a custom vNIC profile on your external oVirt engine,
> setting the "Network Filter" field to "No Network Filter".
> Then edit your ovirt-node VMs, setting the network profile of the NICs
> you are going to use for the management bridge to the profile with no
> network filter.
> You have to shut down and restart your ovirt-node VMs, and then you can
> retry the deployment.
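>
> If you want to see the filter that is blocking the traffic, you can
> inspect it with virsh on the bare-metal host (the VM name below is a
> placeholder for one of your ovirt-node VMs):
>
> # list the network filters libvirt knows about
> virsh nwfilter-list
> # dump the anti-spoofing filter VDSM applies by default
> virsh nwfilter-dumpxml vdsm-no-mac-spoofing
> # check whether a node VM's vNIC still references it
> virsh dumpxml ovirt-node-1 | grep -A1 filterref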
>


Thank you for the explanation; I’m going to try it this evening. I’ll
keep you posted.

Cheers


>
>
>
>>
>>
>> Cheers
>>
>> Matteo
>>
>>
>>>
>>>> I’ve looked around the internet for a solution, but every how-to I’ve
>>>> found follows the same steps as Jason’s. I’ve also already asked
>>>> Jason for help.
>>>>
>>>> Could anyone help me solve this issue?
>>>>
>>>>
>>>>
>>>> Thank you
>>>>
>>>>
>>>>
>>>> Matteo
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
>