On Tue, Apr 11, 2017 at 4:46 PM, Alan Cowles <alan.cowles(a)gmail.com> wrote:
Hey Dan,
Here is a layout of my environment setup and the primary issue I have
run into; hopefully it helps clear up the confusion.
Configure 2 hosts identically, eth0, eth1, eth0.502 (mgmt), eth1.504 (NFS)
and do the self-hosted engine install on one host.
As part of the install you have to identify the network that rhevm
needs to go on, and the network you need to mount your first NFS
storage domain from: in this case, eth0.502 and eth1.504 respectively.
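For reference, the per-host VLAN interfaces described above would look
roughly like this (a sketch; the addresses are placeholders, not taken
from this thread):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0.502  (mgmt / rhevm)
DEVICE=eth0.502
VLAN=yes
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.50.2.11        # placeholder address
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth1.504  (NFS)
DEVICE=eth1.504
VLAN=yes
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.50.4.11        # placeholder address
NETMASK=255.255.255.0
```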
That host and the RHEV-M hosted engine come up; however, if you look
under Networks for the default datacenter, only rhevm exists as an
actual network.
You need to create a new network for NFS, in order to mount ISO/Data
storage domains, even though the engine setup has already mounted NFS via
eth1.504.
When you go to assign this new network from the engine, you cannot
place it on eth1.504, only directly on eth0 or eth1 itself.
Thus I have to be sure to tag that network in the RHEV-M engine.
When the tagged NFS network is placed on eth1, it looks like it breaks
the already existing NFS mount that supports the hosted engine and
causes items to become non-responsive.
Rebooting the host at this stage, items still don't come up correctly and
the hosted engine remains down.
If I console to it and manually set up the eth0 & eth1 interfaces, the
eth0.502 & eth1.504 VLANs, and the rhevm & NFS bridges, and reboot the
host, the host and the engine come up wonderfully, with the defined
networks VLAN-tagged and placed on the appropriate tagged interfaces.
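For illustration, the manually written host1 files would look
something like this (a sketch based on the names in this thread; the
address is a placeholder):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth1.504
DEVICE=eth1.504
VLAN=yes
ONBOOT=yes
BRIDGE=NFS               # enslave the VLAN interface to the NFS bridge

# /etc/sysconfig/network-scripts/ifcfg-NFS
DEVICE=NFS
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.50.4.11        # placeholder address
NETMASK=255.255.255.0

# The rhevm bridge on eth0.502 follows the same pattern.
```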
I then go to deploy a 2nd host as an additional hosted-engine host, and
find that I can select eth0.502 for rhevm and eth1.504 for NFS during
the deploy stages. But when it gets to the stage where it requires you
to define the networks that exist in the current cluster in order to
activate the host and proceed, I'm stuck in the same spot with applying
networks: I can only place them on the eth0/eth1 interfaces. I select
ignore to exit the hosted engine deployment wizard and attempt to apply
them manually, hoping to repeat the steps from node 1, but I found
myself in a pickle because starting VDSM would overwrite the network
configs I had defined manually. Why it does this on one host and not on
the other still perplexes me.
What I ended up doing, once my primary host was rebuilt using the
appropriate bridges and VLAN-tagged interfaces, was reinstalling my 2nd
host completely and configuring it as an additional self-hosted engine
host. This time it imported the network config from the first host
completely, and I wound up with all tagged interfaces working correctly
and VDSM running as designed.
I guess the things that bothered me mainly are the way networks are
assigned in RHEV Manager, which shows the VLANs as sub-interfaces of
eth0/eth1 but doesn't let you assign networks to them, and the odd
behavior of VDSM overwriting configs on one host but not the other.
I'll admit the setup I have is convoluted, but it's what I have to work
with for this project.
Thank you very much for the time and advice thus far.
In general, when VDSM acquires an interface, it marks its ifcfg file,
and some logic exists around that marker. I'm not sure if this is the
case here, but it may explain what you saw.
If the ifcfg file is identified as 'acquired', it will be overwritten
with the config VDSM supports.
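If you want to check which files are marked, something like the
following sketch could help (it assumes VDSM prepends a "# Generated
by VDSM" comment to acquired ifcfg files; verify the exact wording
against your VDSM version):

```python
import glob
import os


def vdsm_owned(dirpath):
    """Return the ifcfg files whose first line marks them as VDSM-generated.

    Assumption: VDSM prepends a "# Generated by VDSM" comment to ifcfg
    files it acquires; adjust the marker if your version words it
    differently.
    """
    owned = []
    for path in glob.glob(os.path.join(dirpath, 'ifcfg-*')):
        with open(path) as f:
            if f.readline().startswith('# Generated by VDSM'):
                owned.append(path)
    return sorted(owned)


# Example: vdsm_owned('/etc/sysconfig/network-scripts')
```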
Networks are attached to NICs or bonds; the way to assign a network to
a specific VLAN is to create a VLAN network (specifying the tag), as
you have done. The logic behind this is to allow the user to create
multiple networks on the same NIC/bond as long as they do not collide
(in terms of VLAN IDs), letting oVirt handle the VLAN interface
creation behind the scenes.
When you create a network, even if its infrastructure already partially
exists (the VLAN iface), oVirt will tear down the configuration and
create it again.
This is especially true when originally there was no bridge and one now
needs to be added.
You could try to create a non-VM network (clear the VM check-box),
which does not add a bridge; perhaps that will help.
Please let us know if you still see this problem with 4.0 or 4.1.
Thanks,
Edy.
On Sun, Apr 9, 2017 at 4:33 AM, Dan Kenigsberg <danken(a)redhat.com> wrote:
> On Fri, Apr 7, 2017 at 4:24 PM, Alan Cowles <alan.cowles(a)gmail.com>
> wrote:
> > Hey guys,
> >
> > I'm in a lab setup currently with 2 hosts, running RHEV-3.5, with a
> > self-hosted engine on RHEL 6.9 servers. I am doing this in order to
> > plot out a production upgrade I am planning going forward to 4.0,
> > and I'm a bit stuck and I'm hoping it's ok to ask questions here
> > concerning this product and version.
> >
> > In my lab, I have many vlans trunked on my switchports, so I have
> > to create individual vlan interfaces on my RHEL install. During the
> > install, I am able to pick my ifcfg-eth0.502 interface for rhevm,
> > and ifcfg-eth1.504 interface for NFS, access the storage and create
> > my self-hosted engine. The issue I am running into is that I get
> > into RHEV-M and continue to set the hosts up or add other hosts;
> > when I go to move my NFS network to host2 it only allows me to
> > select the base eth1 adapter, and not the VLAN tagged version. I am
> > able to tag the VLAN in the RHEV-M configured network itself, but
> > this has the unfortunate side effect of tagging a network on top of
> > the already tagged interface on host1, taking down NFS and the
> > self-hosted engine.
> >
> > I am able to access the console of host1, and I configure the
> > ifcfg files, vlan files, and bridge files to be on the correct
> > interfaces, and I get my host back up, and my RHEV-M back up.
> > However, when I try to make these manual changes to host2 and get
> > it up, the changes to these files are completely overwritten the
> > moment the host reboots, as part of vdsmd start-up.
>
> If that were your only issue, I would have recommended that you read
> https://www.ovirt.org/blog/2016/05/modify-ifcfg-files/ and implement
> a hook that would leave the configuration as you wanted it.
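As a rough illustration of such a hook (the JSON key names and the MTU
tweak below are assumptions for illustration only; see the blog post
above for the exact contract your VDSM version uses):

```python
# Sketch of a VDSM ifcfg hook that re-applies a local customization
# before VDSM writes out an ifcfg file. The file name, the 'ifcfg_file'
# and 'config' keys, and the MTU setting are illustrative assumptions.


def patch_config(ifcfg_file, config):
    """Append a local option to the NFS VLAN interface's config only."""
    if ifcfg_file.endswith('ifcfg-eth1.504') and 'MTU=' not in config:
        config += 'MTU=9000\n'  # example local customization
    return config


# On the host, the hook body would look roughly like this (it requires
# VDSM's hooking module, so it is commented out here):
#
# import hooking
# data = hooking.read_json()
# data['config'] = patch_config(data['ifcfg_file'], data['config'])
# hooking.write_json(data)
```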
>
>
> >
> > Right now, I have vdsmd disabled, and I have host2 configured the
> > way I need it to be, with the rhevm bridge on eth0.502, the NFS
> > bridge on eth1.504, and my VMNet "guest" bridge on eth1.500;
> > however, that leaves me with a useless host by RHEV standards.
> >
> > I've checked several different conf files to see where vdsmd is
> > pulling its configuration from, but I can't find it, or find a way
> > to modify it to fit my needs.
> >
> > Any advice or pointers here would be greatly appreciated. Thank you
> > all in advance.
>
> Pardon me for not clearly understanding the problem at hand.
>
> Could you specify your Engine-defined network names and vlan IDs? Can
> you specify the ifcfgs that you'd like to see on your hosts, and the
> ones re-generated on reboot?
>
> Dan.
>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users