Hi Tom,
It turns out that the issue was setting a default aka native VLAN on the
switch port. The switch is an Arista. After I'd tried re-creating the
networks on the oVirt side, I thought I'd just try removing the default
VLAN, and it worked. For whatever reason, our VMware cluster (that I'm
trying to migrate away from) has this setting, as do our blade centres, and
they have no problem with it. I'm not sure why you'd want a default VLAN
there anyway. Thanks for the reply - all sorted now!
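For anyone who hits this later, the change was on the switch side. A minimal sketch of the sort of trunk config involved - the interface name and VLAN IDs here are made up, and the exact EOS syntax may differ by version:

    interface Ethernet10
       description oVirt node uplink (illustrative name)
       switchport mode trunk
       switchport trunk allowed vlan 91-93
       ! remove the configured native/default VLAN so the oVirt VLANs are all carried tagged
       no switchport trunk native vlan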
Cheers,
Cam
On Thu, Sep 15, 2016 at 1:07 PM, Tom Gamull <tgamull(a)redhat.com> wrote:
Can you eliminate the switch or port config as the issue? I’m a little
unclear as to how you configured the nodes. I basically use a single
NIC (some are LACP bonds) with a GENERAL link type (not TRUNK or ACCESS),
where there is a default VLAN (which I don’t tag) and the rest of the VLANs
are tagged. On my switch (TP-LINK) I have to go to the VLANs and set TAG
on all of them except the DEFAULT one. I couldn’t get TRUNK working even
with one untagged and the rest tagged. For me, using GENERAL was the way
to do it.
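If it helps narrow it down, one way to see what the switch is actually delivering is to watch for tagged vs. untagged frames on the host uplink. Just a sketch - swap in your own NIC or bond name:

    # frames arriving with an 802.1Q tag (-e prints the link-level header, including the VLAN ID)
    tcpdump -i bond0 -e -nn -c 20 vlan
    # untagged frames, i.e. whatever the switch treats as the default/native VLAN
    tcpdump -i bond0 -e -nn -c 20 not vlan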
I’m not a networking expert when it comes to hardware, but this discussion
may clarify things. I have no idea why TRUNK didn’t work where GENERAL did,
and I didn’t spend much more time on it than that, but here’s a discussion
on the topic:
https://supportforums.cisco.com/discussion/11897946/general-vs-trunk-mode
Tom
On Sep 14, 2016, at 3:08 PM, cmc <iucounu(a)gmail.com> wrote:
Hi,
I have modified my VM network to have multiple tagged networks. It used to
be an untagged network and it worked fine, but I needed to add more
networks on the host. The switch it is connected to has the port configured
as a trunk port with these VLANs. When a VM sends traffic out (to get a
DHCP address for instance), it reaches the server but does not get the DHCP
offer that the server sends. I did a tcpdump on the node that hosts the VM
and the packets going out do not have a VLAN tag. I assume the VM host
interface should not have the tag present, but that the p2p1.91 interface
should add the VLAN tag.
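In case it's useful, this is roughly how I've been checking whether the tag gets added on the way out (the DHCP port filter is just an example):

    # on the physical NIC, the DHCP traffic should show up tagged with VLAN 91
    tcpdump -i p2p1 -e -nn 'vlan 91 and (udp port 67 or udp port 68)'
    # on the VLAN sub-interface the same traffic should appear untagged
    tcpdump -i p2p1.91 -e -nn 'udp port 67 or udp port 68'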
The relevant interface configuration on the node that the VM host is on
looks like:
27: sohonet_DMZ: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether a0:36:9f:2a:63:20 brd ff:ff:ff:ff:ff:ff
29: p2p1.91@p2p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmnet state UP
    link/ether a0:36:9f:2a:63:20 brd ff:ff:ff:ff:ff:ff
30: vmnet: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether a0:36:9f:2a:63:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a236:9fff:fe2a:6320/64 scope link
       valid_lft forever preferred_lft forever
31: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmnet state UNKNOWN qlen 500
    link/ether fe:1a:4a:16:01:58 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:158/64 scope link
       valid_lft forever preferred_lft forever
32: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmnet state UNKNOWN qlen 500
    link/ether fe:1a:4a:16:01:5c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc1a:4aff:fe16:15c/64 scope link
       valid_lft forever preferred_lft forever
where vnet1 is the VM's interface, p2p1.91 is the VLAN sub-interface for
the network, vmnet is the bridge the VM NIC is on, and p2p1 is the physical
interface.
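As far as I understand, oVirt/VDSM builds that stack itself, but just to illustrate the layering, it is roughly equivalent to doing this by hand with iproute2 (sketch only, using the names from the output above - not something to run on a managed node):

    # 802.1Q sub-interface that adds/strips the VLAN 91 tag on top of p2p1
    ip link add link p2p1 name p2p1.91 type vlan id 91
    # bridge that the VM tap devices (vnet0, vnet1) and p2p1.91 are attached to
    ip link add vmnet type bridge
    ip link set p2p1.91 master vmnet
    ip link set p2p1 up
    ip link set p2p1.91 up
    ip link set vmnet up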
Configuring an address manually does not help, nor does dropping the
firewall on the host.
I've run the vmnet interface in promiscuous mode to see if I can see
anything coming back, but the return traffic does not appear.
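For reference, what I mean by that is something along these lines (tcpdump puts the interface into promiscuous mode by itself unless you pass -p, so the explicit step may be redundant):

    ip link set dev vmnet promisc on
    # watch for the DHCP offer coming back onto the bridge
    tcpdump -i vmnet -e -nn 'udp port 67 or udp port 68'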
Any ideas as to why the network is not working?
Thanks for any help.
-Cam