Here is a layout of my environment setup and the primary issue I have run into; hopefully this helps clear up the confusion.
Configure two hosts identically (eth0, eth1, eth0.502 for mgmt, eth1.504 for NFS) and do the self-hosted engine install on one host.
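For reference, the pre-install layout on each host is plain RHEL network-scripts, with the IP living on the VLAN sub-interface (the addresses below are placeholders; 502 is mgmt, 504 is NFS):

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (parent NIC, no IP)
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0.502  (mgmt VLAN)
    DEVICE=eth0.502
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.0.2.10    # placeholder
    NETMASK=255.255.255.0

eth1 and eth1.504 mirror the above for the NFS VLAN.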
As part of the install you have to identify the network that rhevm needs to go on and the network you mount your first NFS storage domain from: in this case, eth0.502 and eth1.504 respectively.
That host and the RHEV-M hosted engine come up; however, if you look under Networks for the default datacenter, only rhevm exists as an actual network.
You need to create a new network for NFS in order to mount ISO/Data storage domains, even though the engine setup has already mounted NFS via eth1.504.
When you go to assign this new network from the engine, you cannot place it on eth1.504, only directly on eth0 or eth1 itself.
Thus I have to be sure to tag that network (VLAN 504) in the RHEV-M engine.
When the tagged NFS network is placed on eth1, it looks like it breaks the already existing NFS mount that supports the hosted engine, and the host and storage go non-responsive.
Rebooting the host at this stage doesn't help: things still don't come up correctly and the hosted engine remains down.
If I console in and manually set up the eth0 and eth1 interfaces, the eth0.502 and eth1.504 VLANs, and the rhevm and NFS bridges, then reboot the host, the host and the engine come up wonderfully, with the defined networks VLAN-tagged and placed on the appropriate tagged interfaces.
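For anyone hitting the same wall, this is the shape of what I set up by hand (placeholder addresses again). The key change from the pre-install layout is that the IP moves off the VLAN sub-interface onto the bridge, and the sub-interface is enslaved to it:

    # /etc/sysconfig/network-scripts/ifcfg-rhevm  (bridge carries the IP)
    DEVICE=rhevm
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.0.2.10    # placeholder
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-eth0.502  (VLAN enslaved to the bridge)
    DEVICE=eth0.502
    VLAN=yes
    ONBOOT=yes
    BRIDGE=rhevm

The NFS bridge on top of eth1.504 follows the same pattern.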
I then go to deploy a second host as an additional hosted engine host, and find that I can select eth0.502 for rhevm and eth1.504 for NFS during the deploy stages. But when it gets to the stage where it requires you to define the networks that exist in the current cluster in order to activate the host and proceed, I'm stuck in the same spot with applying networks: I can only place them on the eth0/eth1 interfaces. I select ignore to exit the hosted engine deployment wizard and attempt to apply them manually, hoping to repeat the steps from node 1, but I found myself in a pickle because starting VDSM would overwrite the network configs I had defined by hand. Why it does this on one host and not on the other still perplexes me.
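I haven't pinned down which part of VDSM does the rewrite, but if it helps narrow it down, next time I'll try catching it in the act with something like this (untested sketch; vdsmd is the VDSM service on RHEL 6):

    # Snapshot the ifcfg files, start VDSM, then diff to see what it rewrote.
    cp -a /etc/sysconfig/network-scripts /root/netscripts.before
    service vdsmd start
    diff -r /root/netscripts.before /etc/sysconfig/network-scripts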
What I ended up doing, once my primary host was rebuilt using the appropriate bridges and VLAN-tagged interfaces, was reinstalling my second host completely and configuring it as an additional self-hosted engine host. This time it imports the network config from the first host completely, and I wind up with all tagged interfaces working correctly and VDSM running as designed.
I guess the things that bothered me mainly are the network assignment functionality in RHEV Manager, which shows the VLANs as sub-interfaces of eth0/eth1 but doesn't let you assign networks to them, and the odd behavior of VDSM overwriting configs on one host but not the other.
I'll admit the setup I have is convoluted, but it's what I have to work with for this project.
Thank you very much for the time and advice thus far.