[ovirt-users] Single server hosted engine... almost there

Yedidyah Bar David didi at redhat.com
Sun Dec 11 06:49:18 UTC 2016


On Thu, Dec 8, 2016 at 5:24 PM, Mark Steckel <mjs at fix.net> wrote:
> [Apologies. Accidentally hit send instead of save. Continuing below...]
>
>> ----- Yedidyah Bar David <didi at redhat.com> wrote:
>> > On Thu, Dec 8, 2016 at 12:42 AM, Mark Steckel <mjs at fix.net> wrote:
>> > > Hi,
>> > >
>> > > OK, I reset things and tried again, but was much more careful regarding the DNS setup, which I believe was correct this time. In other words, the FQDNs resolved from both the host and the HE VM.
>> > >
>> > > After the latest failure I executed 'ip address' to see the state of the interfaces. And lo and behold, the /29 IP I had on eth0:1 no longer exists.
>> > >
>> > > So some context.
>> > >
>> > > The server's primary IP is a /24, with the gateway being x.y.z.1.
>> > >
>> > > I have a /29 subnet to use for the VMs.
>> > >
>> > > I have been presuming that I place a.b.c.1/29 on eth0:1 as the subnet's gateway, and that oVirt will either keep it in place or migrate it to the ovirtmgmt device. Instead it is deleted during "hosted-engine --deploy". (Note: when the .1/29 is assigned to eth0:1, the IP address is reachable from the Internet.)
>> > >
>> > > Dnsmasq is configured to a) serve a.b.c.2/29 through a.b.c.6/29 via DHCP and b) resolve unique FQDNs for each IP. The HE VM is set to receive the a.b.c.2/29 address.
>> > >
>> > > Am I missing and/or just misunderstanding something here?
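[For reference, a dnsmasq configuration matching the setup described above might look roughly like the following sketch; all addresses, the MAC, and the FQDN are illustrative placeholders, not values from this thread:]

```
# dnsmasq sketch for the a.b.c.0/29 subnet described above
# (addresses, MAC, and hostname are placeholders)
dhcp-range=a.b.c.2,a.b.c.6,255.255.255.248,12h
# Hand out the /29 gateway to DHCP clients
dhcp-option=option:router,a.b.c.1
# Pin the engine VM's MAC to a fixed address with a resolvable FQDN
dhcp-host=52:54:00:aa:bb:cc,engine.example.com,a.b.c.2
host-record=engine.example.com,a.b.c.2
```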
>> >
>> > "eth0:1" is not really a different interface.
>> >
>> > Part of the deploy process is to take the interface you have chosen,
>> > create a new bridge, copy part of the configuration from the nic to
>> > the bridge, and add the nic to the bridge. This is one of the most
>> > delicate parts of the process: the one that, if it fails, might leave you
>> > with no network access, and the one due to which we recommend running this
>> > inside 'screen'. You can't do this to "eth0" and keep "eth0:1"
>> > untouched. You need either a vlan interface or a separate physical
>> > nic. If you feel like it, please open a bug to make 'hosted-engine
>> > --deploy' notice and prevent what you tried to do. Currently it does
>> > not check IP aliases.
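[On an EL7-era host, the VLAN route suggested above could look roughly like this ifcfg sketch; the deploy process would then bridge the VLAN device, leaving the base NIC's addressing alone. The VLAN ID 100 and the a.b.c.1/29 address are assumptions for illustration, and the switch port must carry the tag:]

```
# /etc/sysconfig/network-scripts/ifcfg-eth0.100 -- hypothetical VLAN 100 on eth0
DEVICE=eth0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=a.b.c.1
PREFIX=29
```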
>
> I was creating the /29 gw IP on eth0:1 because it seemed the simplest thing to do. There is no requirement for it to hang off of eth0.
>
> Given that I have to hang the entire /29 subnet on the host (and VMs), and I am presuming that the gw IP of the /29 must be on the host, do you have a suggestion of how to configure this? (And to be explicit about it, do I need the /29 gw IP on the host to ensure the vm networking operates?)

Not sure, but even if you do, I think you can do that after --deploy finishes.
Can you please detail (again, perhaps) your intention/plan? How many NICs do you
have, can you use VLANs, etc.? Also, if it's a single host that will remain
single, you might find it simpler to use virt-manager. oVirt is intended for
managing larger setups.

>
> Without the engine vm logs it is difficult to determine why the engine vm fails when resolving its FQDN. At this point I'm presuming it's due to a networking/routing issue, but am open to suggestions.
>
>
>> > Another point - the script that failed is 'engine-setup'. This one
>> > runs inside the engine vm, and keeps its logs in
>> > /var/log/ovirt-engine/setup. If it fails again, please check/post also
>> > these, if at all possible (that is, if you can access the vm).
>> > Thinking about this, it might be possible for 'hosted-engine --deploy'
>> > to get this log, perhaps through the virtual serial connection it
>> > opens to show you the output, and save it on the host side, for easier
>> > debugging. Please open a bug for this too :-)
>>
>
> When the engine vm setup fails I am unable to connect to it via "hosted-engine --console". Should console access to the engine vm exist at this point? If so, what is the best way to access the engine vm console?
>
> The lack of access to the engine vm logs is very painful when trying to diagnose what is going wrong. Ideas welcomed!
>

If the vm is still up, you can see its qemu process on the host
with 'ps'.
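For example, something like this sketch (the grep pattern is an assumption; the hosted-engine VM usually has "HostedEngine" in its qemu command line, but the name may vary by version):

```shell
# List any running qemu processes on the host; if the engine VM is up,
# its command line should appear here.
pgrep -af qemu 2>/dev/null || echo "no qemu process found"
```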

I detailed most of what I know about accessing the console in:

http://www.ovirt.org/documentation/admin-guide/hosted-engine-console/

Please note that in recent versions, '--console' connects you to the
serial console, not the graphical one. We did this so that we do not
need to enforce installing a graphical env on mostly-headless hosts.
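As a sketch, attaching to the serial console looks like this ('hosted-engine --console' is the documented entry point; the guard is only so the snippet degrades gracefully on hosts without the tooling):

```shell
# Attach to the engine VM's serial console, if the hosted-engine CLI is present.
if command -v hosted-engine >/dev/null 2>&1; then
    hosted-engine --console
else
    echo "hosted-engine not installed on this host"
fi
```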

Best,
-- 
Didi
