[ovirt-users] CentOS 7 + oVirt 3.5 + OpenVPN

Darrell Budic budic at onholyground.com
Sun Oct 19 03:04:31 UTC 2014


Looks like an OpenVPN config issue and not an oVirt issue from this. 192.168.124.1 is not in the same network as 192.168.124.200/25 (that /25 is 192.168.124.128/25, usable hosts .129 through .254), so the kernel rejects it as a gateway; try 192.168.124.129. A config sketch follows below the quoted message.
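
For a quick sanity check from a shell, a sketch that assumes the server end of the tunnel really does sit at 192.168.124.129 (the log below doesn't confirm that):

  # tun0 is 192.168.124.200/25, i.e. network 192.168.124.128/25.
  # 192.168.124.1 falls outside that range, so the kernel answers
  # "No such process" when asked to route via it. An in-subnet
  # gateway should be accepted:
  /usr/sbin/ip route add 192.168.0.0/16 via 192.168.124.129
  /usr/sbin/ip route show 192.168.0.0/16

If the manual add succeeds, the permanent fix belongs in the OpenVPN config rather than in a one-off route.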

> On Oct 18, 2014, at 7:45 AM, Phil Daws <uxbod at splatnix.net> wrote:
> 
> Hello:
> 
> I have installed oVirt 3.5 VDSM on a CentOS 7 node and then OpenVPN. The problem is that when I start OpenVPN I receive the following messages:
> 
> Oct 18 13:29:50 kvm01 openvpn[4159]: /usr/sbin/ip link set dev tun0 up mtu 1500
> Oct 18 13:29:50 kvm01 openvpn[4159]: /usr/sbin/ip addr add dev tun0 192.168.124.200/25 broadcast 192.168.124.255
> Oct 18 13:29:50 kvm01 openvpn[4159]: /usr/sbin/ip route add 192.168.0.0/16 via 192.168.124.1
> Oct 18 13:29:50 kvm01 openvpn[4159]: ERROR: Linux route add command failed: external program exited with error status: 2
> 
> and if I run the route command manually:
> 
> [root@kvm01 sysconfig]# /usr/sbin/ip route add 192.168.0.0/16 via 192.168.124.1
> RTNETLINK answers: No such process
> 
> It would appear the tunnel is up:
> 
> [root@kvm01 sysconfig]# ip add ls
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>    inet 127.0.0.1/8 scope host lo
>       valid_lft forever preferred_lft forever
>    inet6 ::1/128 scope host 
>       valid_lft forever preferred_lft forever
> 2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN 
>    link/ether f2:c9:ce:e5:ac:32 brd ff:ff:ff:ff:ff:ff
> 3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UP qlen 1000
>    link/ether c8:1f:66:c4:2c:76 brd ff:ff:ff:ff:ff:ff
>    inet6 fe80::ca1f:66ff:fec4:2c76/64 scope link 
>       valid_lft forever preferred_lft forever
> 4: em2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>    link/ether c8:1f:66:c4:2c:77 brd ff:ff:ff:ff:ff:ff
> 6: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
>    link/ether 46:af:6e:9a:1e:4b brd ff:ff:ff:ff:ff:ff
> 8: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
>    link/ether c8:1f:66:c4:2c:76 brd ff:ff:ff:ff:ff:ff
>    inet XXX.XXX.XXX.XXX/23 brd 88.150.253.255 scope global ovirtmgmt
>       valid_lft forever preferred_lft forever
>    inet6 fe80::ca1f:66ff:fec4:2c76/64 scope link 
>       valid_lft forever preferred_lft forever
> 10: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
>    link/none 
>    inet 192.168.124.200/25 brd 192.168.124.255 scope global tun0
>       valid_lft forever preferred_lft forever
> 
> Any thoughts as to why the route will not work? The rationale for this approach is that it is a cloud server and I wish to use a private network to reach the VMs installed on that node.
> 
> Thanks, Phil
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
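
The failed route itself presumably comes from a route directive with an explicit gateway, either in the node's own OpenVPN config or pushed by the server; that's a guess, since the config wasn't posted. The directives below are standard OpenVPN, and the addresses are taken from the log above:

  # The config currently amounts to something like:
  #   route 192.168.0.0 255.255.0.0 192.168.124.1
  # Either drop the explicit gateway so OpenVPN substitutes
  # vpn_gateway (the remote end of tun0):
  route 192.168.0.0 255.255.0.0
  # ...or keep it explicit but use an address inside
  # 192.168.124.128/25:
  #   route 192.168.0.0 255.255.0.0 192.168.124.129
  # If the server pushes the route, the same change applies there:
  #   push "route 192.168.0.0 255.255.0.0"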
