<div dir="ltr">Hey Dan,<div><br></div><div>Here is a layout of my environment setup and the primary issue I have run into; hopefully it clears up the confusion.</div><div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">Configure 2 hosts identically (eth0, eth1, eth0.502 (mgmt), eth1.504 (NFS)) and do the self-hosted engine install on one host. </div><div style="font-size:12.8px">As part of the install you have to identify the network that rhevm needs to go on, and the network you mount your first NFS storage domain from: in this case eth0.502 and eth1.504, respectively.</div><div style="font-size:12.8px">That host and the RHEV-M hosted engine come up; however, under Networks for the default datacenter, only rhevm exists as an actual network. </div><div style="font-size:12.8px">You need to create a new network for NFS in order to mount ISO/Data storage domains, even though the engine setup has already mounted NFS via eth1.504.</div><div style="font-size:12.8px">When you go to assign this new network from the engine, you cannot place it on eth1.504, only directly on eth0 or eth1 itself.</div><div style="font-size:12.8px"><span style="font-size:12.8px">Thus I have to be sure to tag that network in the RHEV-M engine.</span></div><div style="font-size:12.8px"><span style="font-size:12.8px">When the tagged NFS network is placed on eth1, it appears to break the already existing NFS mount that supports the hosted engine, and causes things to become non-responsive.</span></div><div style="font-size:12.8px">Rebooting the host at this stage doesn't help; things still don't come up correctly and the hosted engine remains down.</div><div style="font-size:12.8px"><span style="font-size:12.8px">If I console in and manually set up the eth0 & eth1 interfaces, the eth0.502 & eth1.504 VLANs, and the rhevm & NFS bridges, then reboot the host, the host and the engine come up wonderfully with the defined networks VLAN-tagged and placed on the 
appropriate tagged interfaces.</span></div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">I then go to deploy a 2nd host as an additional hosted-engine host, and find that I can select eth0.502 for rhevm and eth1.504 for NFS during the deploy stages. But when it gets to the stage where it requires you to define the networks that exist in the current cluster in order to activate the host and proceed, I'm stuck in the same spot with applying networks: I can only place them on the eth0/eth1 interfaces. I select ignore to exit the hosted-engine deployment wizard and attempt to apply them manually, hoping to repeat the steps from node 1, but find myself in a pickle because starting VDSM overwrites the network configs I defined by hand. Why it does this on one host and not the other still perplexes me.</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">What I ended up doing, once my primary host was rebuilt using the appropriate bridges and VLAN-tagged interfaces, was reinstalling my 2nd host completely and configuring it as an additional self-hosted-engine host. 
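For reference, the manual layout I keep re-applying for the NFS side amounts to ifcfg files along these lines. This is only a sketch: the device and bridge names (eth1.504, NFS) are the ones from this setup, the IP address is a placeholder, and the script writes into a scratch directory so it can be tried safely; on a real host these files live in /etc/sysconfig/network-scripts. The rhevm bridge on eth0.502 is analogous.

```shell
# Sketch of the manual RHEL 6 network layout for the NFS side
# (eth1 -> VLAN eth1.504 -> bridge "NFS"). IPADDR is a placeholder.
# DIR is a scratch directory here; on a real host these files go
# in /etc/sysconfig/network-scripts.
DIR=$(mktemp -d)

# VLAN sub-interface on eth1, enslaved to the NFS bridge
cat > "$DIR/ifcfg-eth1.504" <<'EOF'
DEVICE=eth1.504
VLAN=yes
BRIDGE=NFS
ONBOOT=yes
BOOTPROTO=none
EOF

# The bridge itself carries the IP used for the NFS mounts
cat > "$DIR/ifcfg-NFS" <<'EOF'
DEVICE=NFS
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
EOF

ls "$DIR"
```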
This time it imports the network config from the first host completely, and I wind up with all tagged interfaces working correctly and VDSM running as designed.</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">I guess the things that mainly bother me are the network-assignment functionality in RHEV Manager, which shows the VLANs as sub-interfaces of eth0/eth1 but doesn't let you assign networks to them, and the odd behavior of VDSM overwriting configs on one host but not the other.</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">I'll admit the setup I have is convoluted, but it's what I have to work with for this project.</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">Thank you very much for the time and advice thus far.</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Apr 9, 2017 at 4:33 AM, Dan Kenigsberg <span dir="ltr"><<a href="mailto:danken@redhat.com" target="_blank">danken@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Fri, Apr 7, 2017 at 4:24 PM, Alan Cowles <<a href="mailto:alan.cowles@gmail.com">alan.cowles@gmail.com</a>> wrote:<br>
> Hey guys,<br>
><br>
> I'm in a lab setup currently with 2 hosts, running RHEV-3.5, with a<br>
> self-hosted engine on RHEL 6.9 servers. I am doing this in order to plot out<br>
> a production upgrade I am planning going forward to 4.0, and I'm a bit stuck<br>
> and I'm hoping it's ok to ask questions here concerning this product and<br>
> version.<br>
><br>
> In my lab, I have many vlans trunked on my switchports, so I have to create<br>
> individual vlan interfaces on my RHEL install. During the install, I am able<br>
> to pick my ifcfg-eth0.502 interface for rhevm, and ifcfg-eth1.504 interface<br>
> for NFS, access the storage and create my self-hosted engine. The issue I am<br>
> running into is that I get into RHEV-M, and I am continuing to set the hosts<br>
> up or add other hosts, when I go to move my NFS network to host2 it only<br>
> allows me to select the base eth1 adapter, and not the VLAN tagged version.<br>
> I am able to tag the VLAN in the RHEV-M configured network itself, but this<br>
> has the unfortunate side effect of tagging a network on top of the already<br>
> tagged interface on host1, taking down NFS and the self hosted engine.<br>
><br>
> I am able to access the console of host1, and I configure the ifcfg files,<br>
> vlan files, and bridge files to be on the correct interfaces, and I get my<br>
> host back up, and my RHEV-M back up. However when I try to make these manual<br>
> changes to host2 and get it up, the changes to these files are completely<br>
> overwritten the moment the host reboots connected to vdsmd start-up.<br>
<br>
</span>If that was your only issue, I would have recommended you to read<br>
<a href="https://www.ovirt.org/blog/2016/05/modify-ifcfg-files/" rel="noreferrer" target="_blank">https://www.ovirt.org/blog/<wbr>2016/05/modify-ifcfg-files/</a> and implement a<br>
hook that would leave the configuration as you wanted it.<br>
<span class=""><br>
<br>
><br>
> Right now, I have vdsmd disabled, and I have host2 configured the way I need<br>
> it to be with the rhevm bridge on eth0.502, the NFS bridge on eth1.504, and<br>
> my VMNet "guest" bridge on eth1.500, however that leaves me with a useless<br>
> host from RHEV standards.<br>
><br>
> I've checked several different conf files to see where vdsmd is pulling its<br>
> configuration from but I can't find it, or find a way to modify it to fit my<br>
> needs.<br>
><br>
> Any advice or pointers here would be greatly appreciated. Thank you all in<br>
> advance.<br>
<br>
</span>Pardon me for not clearly understanding the problem at hand.<br>
<br>
Could you specify your Engine-defined network names and vlan IDs? Can<br>
you specify the ifcfgs that you'd like to see on your hosts, and the<br>
ones re-generated on reboot?<br>
<span class="HOEnZb"><font color="#888888"><br>
Dan.<br>
</font></span></blockquote></div><br></div>