[ovirt-users] management network VLAN mismatch
Garry Tiedemann
garrytiedemann at networkvideo.com.au
Wed Feb 24 02:03:16 EST 2016
Hi folks,
I'm in the process of upgrading to 3.6, and have encountered a new error
message. I understand the error, and how to fix it.
My question is, can fixing it affect production VM traffic? I don't
think so, but I'd appreciate confirmation from someone who really knows,
please.
Description of what I am doing:
I've upgraded hosted-engine, and am now upgrading the first hypervisor node.
My method, which may be outdated, is: with no VMs running on the host, I
drop it out of the cluster, do yum updates and reboots, then add the host
back to the cluster when it's ready.
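For what it's worth, that maintenance/update cycle can also be driven
through the engine API. Below is a minimal sketch using the v4 Python
SDK (ovirtsdk4), which postdates my 3.6 setup; the engine URL,
credentials, and host name are placeholders, not my real values:

    # Sketch: put a host into maintenance, update it out-of-band,
    # then reactivate it, via the oVirt v4 Python SDK (ovirtsdk4).
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )
    hosts_service = connection.system_service().hosts_service()

    # Look up the host by name and switch it to maintenance.
    host = hosts_service.list(search='name=bl05')[0]
    host_service = hosts_service.host_service(host.id)
    host_service.deactivate()

    # ... run yum update and reboot the host out-of-band ...

    # Bring the host back into the cluster afterwards.
    host_service.activate()
    connection.close()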
When I add the host to the cluster, I get this error message:
Failed to configure management network on host bl05. Host bl05 has an
interface eth0.20 for the management network configuration with VLAN-ID
(20), which is different from data-center definition (none).
I know what this means, and will explain it for the benefit of others.
It means that:
* the ovirtmgmt bridge on the host is attached to a VLAN 20
subinterface. That is by design; it is correct.
* However, in the Data Center > Networks definition of ovirtmgmt, there is
no VLAN tag defined, so my hosted-engine believes that ovirtmgmt should
go into VLAN 1 (untagged).
Also, for completeness:
* there is PVID 20 on the adjacent switchport, which means "untagged
ingress traffic should be tagged into VLAN 20". This is why my ovirtmgmt
network has been working.
* I am grateful that the error message is now appearing, as it shows me
where I misconfigured this a long time ago.
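As a side note, the mismatch itself can be seen through the API: the
logical network definition carries no VLAN, while the host NIC does. A
sketch with the same v4 Python SDK (ovirtsdk4), again with placeholder
connection details:

    # Sketch: compare the VLAN on the ovirtmgmt definition with the
    # VLANs the host's interfaces actually carry (ovirtsdk4).
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )
    system = connection.system_service()

    # VLAN tag (if any) on the ovirtmgmt logical network definition.
    net = system.networks_service().list(search='name=ovirtmgmt')[0]
    print('ovirtmgmt definition VLAN:', net.vlan.id if net.vlan else None)

    # VLAN tags on the host's interfaces; eth0.20 should report 20.
    host = system.hosts_service().list(search='name=bl05')[0]
    nics = system.hosts_service().host_service(host.id).nics_service()
    for nic in nics.list():
        print(nic.name, 'VLAN:', nic.vlan.id if nic.vlan else None)

    connection.close()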
So. To fix this, I need to add the VLAN 20 tag to ovirtmgmt in Data
Center > Networks.
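The same change can be scripted; a sketch with the v4 Python SDK
(ovirtsdk4, newer than my setup), placeholder credentials as before:

    # Sketch: add the missing VLAN 20 tag to the ovirtmgmt network
    # definition (ovirtsdk4; on 3.6 the older ovirtsdk would apply).
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )
    networks_service = connection.system_service().networks_service()

    # Find the ovirtmgmt network and set its VLAN tag to 20.
    net = networks_service.list(search='name=ovirtmgmt')[0]
    networks_service.network_service(net.id).update(
        types.Network(vlan=types.Vlan(id=20))
    )
    connection.close()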
My question: If I add the VLAN 20 tag to ovirtmgmt in Data Center >
Networks, is there any possibility of breaking traffic (or VMs) on other
(production) VLANs?
From my general knowledge, I cannot see how that would be the case, but I
would be grateful for an informed opinion from someone who has tried it
before, please.
Best regards to all,
Garry