Hi Peter,
Sorry for being late to the party. I assume you and William are working on
the same deployment, so I'll be responding to both of you.
On 18/01/14 23:56, Peter Styk wrote:
On 18 January 2014 17:11, Itamar Heim <iheim@redhat.com> wrote:
On 01/18/2014 04:52 PM, Peter Styk wrote:
So I got the ovirtmgmt "VM network" checkbox unticked. I had to remove the
network from all VMs first, then tried to add a new logical network to eth0
by drag & drop, but it was refused since ovirtmgmt was out of sync. So I
synced it... and that's how I lost access to my hosted remote system. End
of story. So that's it, I guess: the automated install doesn't work (during
the switch the network goes dead and doesn't come back, and remote access
is lost), so I found a manual way that works. But then I need to sync it so
that my ovirtmgmt is not a VM network. And even if it's not, to get another
network onto the interface I need to sync it...
So basically what you're saying is that switching the management network
to non-VM while it is provisioned on hosts, then trying to synchronize
it, leads to loss of connection? We'll need to have a look.
this sounds like a bug - can you provide clear reproduction steps?
The reproduction steps are in the network configuration alone. After
installing the engine, I have to set the network up to the config listed
here:
http://styk.tv/wp-content/uploads/2014/01/oVirtHosted1_almost_working.png
Unfortunately I can't rely on the engine-vdsm duo to help out.
Anything above the host line on the diagram is physical setup, and it is
the only configuration that doesn't disconnect me from the net. The only
thing missing from the diagram is that ifcfg-eth0 also has a HWADDR
attribute with the MAC address of the physical eth0 device.
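For the archives, a minimal sketch of what that working configuration would
look like as el6 network-scripts; the addresses and MAC below are
placeholders, not Peter's actual values (those are in the diagram):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative sketch only)
DEVICE=eth0
HWADDR=00:11:22:33:44:55   # MAC of the physical eth0, as noted above
ONBOOT=yes
BRIDGE=ovirtmgmt           # enslave eth0 to the management bridge
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt (illustrative sketch only)
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10          # placeholder; the host's address from the diagram
NETMASK=255.255.255.0      # placeholder
DELAY=0
NM_CONTROLLED=no
```

The point being that the bridge, not eth0, carries the IP configuration, and
eth0 is pinned to the physical NIC via HWADDR.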
Once this survives a "service network restart" I can proceed to the VDSM
install, and from there it's straightforward... that is, until I try to
SYNC ovirtmgmt on the host inside. I should mention that after the oVirt
engine is working, I destroy the default cluster and create a new local one.
Obviously I cannot access the logs, since access to the host is gone, but
the entire setup is scripted, including provisioning, so I can easily
rebuild the whole thing within 15 or so minutes by running a script. Anyone
who would like to benefit from my findings can use this script, gain access
to my host, and learn with me how to overcome this.
From the above information I understand that you are still in the
initial stages of installation and are not scared of starting anew. If
that is the case, I would suggest that you do start from a clean DC again.
Try to configure the management network as non-VM prior to adding any
host, then add the hosts. As far as I know, this should provision the
network correctly on the hosts to begin with (it should work when
synchronizing it as well, but let's try this way and see if it works
better).
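That step could even be scripted before any host is added. A rough sketch,
assuming the 3.3-era REST API where a network's "VM" role is controlled by
its <usages> element (the URL, credentials and IDs below are placeholders,
and the exact XML shape is my assumption, so verify against your engine's
/api):

```shell
# Illustrative sketch only -- adjust URL, credentials and IDs to your setup.

# Find the ovirtmgmt network's id:
curl -s -k -u admin@internal:password \
    https://engine.example.com/api/networks | grep -B2 ovirtmgmt

# Clear the "vm" usage so ovirtmgmt becomes non-VM before any host is added:
curl -s -k -u admin@internal:password -X PUT \
    -H 'Content-Type: application/xml' \
    -d '<network><usages/></network>' \
    https://engine.example.com/api/networks/NETWORK_ID
```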
The script will provision the host (fresh OS install), log in, fetch the
files ifcfg-eth0, ifcfg-ovirtmgmt, ifcfg-ovirtmgmt-range0, ovirt_answers
and route-ovirtmgmt, then install epel 6-8, install pgp, localinstall
ovirt-el6.10-1, install bridge-utils, upgrade, and set the hostname. After
reboot, once ssh is alive again, it sets up the local data, images and iso
folders, installs ovirt-engine, sets ipv4 forwarding and proxy_arp=1,
restarts the network, runs engine-setup with ovirt_answers (including the
CLI), and stops iptables, since the engine and vdsm rules still prevent the
connection when they are on. That's it: a working system in 15 minutes.
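Condensed into shell, the steps above would look roughly like this; package
names, file locations and the hostname are my assumptions, not the actual
script:

```shell
#!/bin/sh
# Rough sketch of the provisioning steps described above (not the real script).
set -e

# Network config files prepared beforehand (see the diagram):
cp ifcfg-eth0 ifcfg-ovirtmgmt ifcfg-ovirtmgmt-range0 route-ovirtmgmt \
   /etc/sysconfig/network-scripts/

yum -y install epel-release                 # the "epel 6-8" step
yum -y localinstall ovirt-release-el6*.rpm  # the "ovirt-el6.10-1" step
yum -y install bridge-utils
yum -y upgrade
hostname engine.example.com                 # placeholder hostname

# After reboot, once ssh is back:
mkdir -p /data /images /iso                 # local data/images/iso folders
yum -y install ovirt-engine
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.all.proxy_arp=1
service network restart
engine-setup --config-append=ovirt_answers  # answer file, non-interactive
service iptables stop                       # engine/vdsm rules block the link
```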
Still to do are the engine-api calls to create the local cluster, join the
engine with the local vdsm, and set up the private network with a pfSense
instance as router/NAT/DHCP for 10.0.0.0/24.
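For the cluster-creation part, a rough sketch against the REST API; the XML
shape, cluster name, CPU type and data-center reference below are
assumptions for a 3.3-era engine, not tested calls:

```shell
# Illustrative sketch only -- adjust URL, credentials, CPU type and DC name.
curl -s -k -u admin@internal:password -X POST \
    -H 'Content-Type: application/xml' \
    -d '<cluster>
          <name>local</name>
          <cpu id="Intel SandyBridge Family"/>
          <data_center><name>Default</name></data_center>
        </cluster>' \
    https://engine.example.com/api/clusters
```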
Peter