[ovirt-users] creating a vlan-tagged network

Edward Haas ehaas at redhat.com
Mon Jan 2 07:57:42 UTC 2017


On Sun, Jan 1, 2017 at 7:16 PM, Jim Kusznir <jim at palousetech.com> wrote:

> I pinged both the router on the subnet and a host IP in between the two
> IPs.
>
> [root@ovirt3 ~]# ping -I 162.248.147.33 162.248.147.1
> PING 162.248.147.1 (162.248.147.1) from 162.248.147.33 : 56(84) bytes of
> data.
> 64 bytes from 162.248.147.1: icmp_seq=1 ttl=255 time=8.17 ms
> 64 bytes from 162.248.147.1: icmp_seq=2 ttl=255 time=7.47 ms
> 64 bytes from 162.248.147.1: icmp_seq=3 ttl=255 time=7.53 ms
> 64 bytes from 162.248.147.1: icmp_seq=4 ttl=255 time=8.42 ms
> ^C
> --- 162.248.147.1 ping statistics ---
> 4 packets transmitted, 4 received, 0% packet loss, time 3004ms
> rtt min/avg/max/mdev = 7.475/7.901/8.424/0.420 ms
> [root@ovirt3 ~]#
>
> The VM only has its public IP.
>
> --Jim
>

Very strange; it all looks good to me.

I can try to help you debug using tcpdump; just send me the details for a
remote connection privately.
It will also help if you join the vdsm or ovirt IRC channels.
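
As a rough sketch of what I mean (assuming the interface names em1, em1.2 and
Public_Cable from your output below, and VLAN 2 as the tagged public network),
something like this on the host while the VM pings its gateway should show
where the traffic stops:

  # do the VM's ICMP requests leave the physical NIC tagged with VLAN 2,
  # and do the replies ever come back?
  tcpdump -i em1 -e -nn vlan 2 and icmp

  # do they appear (already untagged) on the VLAN device and on the bridge?
  tcpdump -i em1.2 -nn icmp
  tcpdump -i Public_Cable -nn icmp

If the requests show up tagged on em1 but no replies return, the problem is on
the switch/router side; if they never reach em1 at all, it is inside the host.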


>
> On Jan 1, 2017 01:26, "Edward Haas" <ehaas at redhat.com> wrote:
>
>>
>>
>> On Sun, Jan 1, 2017 at 10:50 AM, Jim Kusznir <jim at palousetech.com> wrote:
>>
>>> I currently only have two IPs assigned to me... I can try to take
>>> another, but that may not route out of the rack.  I've got the VM on one of
>>> the IPs and the host on the other currently.
>>>
>>> The switch is a "web-managed" basic 8-port switch (thrown in for testing
>>> while the "real" switch is in transit).  The 3 ports the hosts are plugged
>>> into are configured with vlan 1 untagged (set as PVID) and vlan 2
>>> tagged.  Another port on the switch, untagged on vlan 1, is connected to the
>>> router for the ovirtmgmt network (protected by a VPN, but not "burning"
>>> public IPs for mgmt purposes); another couple of ports are untagged on vlan
>>> 2.  One of those ports goes out of the rack, another goes to the router's
>>> internet port.  The router gets to the internet just fine.
>>>
>>> VM:
>>> kusznir@FusionPBX:~$ ip address
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>>> group default
>>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>     inet 127.0.0.1/8 scope host lo
>>>        valid_lft forever preferred_lft forever
>>>     inet6 ::1/128 scope host
>>>        valid_lft forever preferred_lft forever
>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>> state UP group default qlen 1000
>>>     link/ether 00:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
>>>     inet 162.248.147.31/24 brd 162.248.147.255 scope global eth0
>>>        valid_lft forever preferred_lft forever
>>>     inet6 fe80::21a:4aff:fe16:151/64 scope link
>>>        valid_lft forever preferred_lft forever
>>> kusznir@FusionPBX:~$ ip route
>>> default via 162.248.147.1 dev eth0
>>> 162.248.147.0/24 dev eth0  proto kernel  scope link  src 162.248.147.31
>>> kusznir@FusionPBX:~$
>>>
>>> Host:
>>> [root@ovirt3 ~]# ip address
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>     inet 127.0.0.1/8 scope host lo
>>>        valid_lft forever preferred_lft forever
>>>     inet6 ::1/128 scope host
>>>        valid_lft forever preferred_lft forever
>>> 2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
>>> ovirtmgmt state UP qlen 1000
>>>     link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>>> 3: em2: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
>>>     link/ether 00:21:9b:98:2f:46 brd ff:ff:ff:ff:ff:ff
>>> 4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
>>>     link/ether 00:21:9b:98:2f:48 brd ff:ff:ff:ff:ff:ff
>>> 5: em4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state
>>> DOWN qlen 1000
>>>     link/ether 00:21:9b:98:2f:4a brd ff:ff:ff:ff:ff:ff
>>> 6: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>>     link/ether 8e:1b:51:60:87:55 brd ff:ff:ff:ff:ff:ff
>>> 7: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>>> state UP
>>>     link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>>>     inet 192.168.8.13/24 brd 192.168.8.255 scope global dynamic
>>> ovirtmgmt
>>>        valid_lft 54830sec preferred_lft 54830sec
>>> 11: em1.2@em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>>> master Public_Cable state UP
>>>     link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>>> 12: Public_Cable: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
>>> noqueue state UP
>>>     link/ether 00:21:9b:98:2f:44 brd ff:ff:ff:ff:ff:ff
>>>     inet 162.248.147.33/24 brd 162.248.147.255 scope global Public_Cable
>>>        valid_lft forever preferred_lft forever
>>> 14: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>> master ovirtmgmt state UNKNOWN qlen 500
>>>     link/ether fe:1a:4a:16:01:54 brd ff:ff:ff:ff:ff:ff
>>>     inet6 fe80::fc1a:4aff:fe16:154/64 scope link
>>>        valid_lft forever preferred_lft forever
>>> 15: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>> master ovirtmgmt state UNKNOWN qlen 500
>>>     link/ether fe:1a:4a:16:01:52 brd ff:ff:ff:ff:ff:ff
>>>     inet6 fe80::fc1a:4aff:fe16:152/64 scope link
>>>        valid_lft forever preferred_lft forever
>>> 16: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>> master ovirtmgmt state UNKNOWN qlen 500
>>>     link/ether fe:1a:4a:16:01:53 brd ff:ff:ff:ff:ff:ff
>>>     inet6 fe80::fc1a:4aff:fe16:153/64 scope link
>>>        valid_lft forever preferred_lft forever
>>> 17: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>>> master Public_Cable state UNKNOWN qlen 500
>>>     link/ether fe:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
>>>     inet6 fe80::fc1a:4aff:fe16:151/64 scope link
>>>        valid_lft forever preferred_lft forever
>>> [root@ovirt3 ~]# ip route
>>> default via 192.168.8.1 dev ovirtmgmt
>>> 162.248.147.0/24 dev Public_Cable  proto kernel  scope link  src
>>> 162.248.147.33
>>> 169.254.0.0/16 dev ovirtmgmt  scope link  metric 1007
>>> 169.254.0.0/16 dev Public_Cable  scope link  metric 1012
>>> 192.168.8.0/24 dev ovirtmgmt  proto kernel  scope link  src
>>> 192.168.8.13
>>> [root@ovirt3 ~]# brctl show
>>> bridge name     bridge id           STP enabled     interfaces
>>> ;vdsmdummy;     8000.000000000000   no
>>> Public_Cable    8000.00219b982f44   no              em1.2
>>>                                                     vnet3
>>> ovirtmgmt       8000.00219b982f44   no              em1
>>>                                                     vnet0
>>>                                                     vnet1
>>>                                                     vnet2
>>> [root@ovirt3 ~]#
>>>
>>> I did see that the cluster settings have a switch type option; it is
>>> currently at the default "LEGACY", with "OVS" also available.  Not sure if
>>> that matters or not.
>>>
>>> I configured another VM on the network with a statically assigned IP; it
>>> could ping the other VM as well as the host, but not the internet.  The
>>> host can still ping the internet.
>>>
>>> --Jim
>>>
>>
>>
>> What internet address are you pinging?
>> For the successful ping, can you use ping -I (capital i) to choose the
>> source address the traffic leaves the host with?
>>
>>

