Netflix - Application based routing over another VPN

By Freeaqingme on Tuesday 17 September 2013 03:28 - Comments (4)
Categories: Linux, Networking, Views: 9.523

In my last blogpost I already showed how you could use a VPN connection for one application, while using your normal WAN for all other applications. Although I'm using Netflix as an example, you could use the same mechanism for virtually any other IP-based application.
The idea
For many people the suggested solution will suffice. However, as more and more people do these days, I proxy all my traffic through my (EU-based) VPS. This is perfectly doable with the setup I described in my last blogpost. But it becomes tedious to set up if you have multiple devices. Therefore, I want to do all my routing on my VPS, as outlined in this diagram:

                                               |----------|       |----------|    
                                               |          |       |          |    
                                        /------|   VPN    |------>| Netflix  | 
|----------|       |----------|        /       |          |       |          |
|          |       |          |-------/        |----------|       |----------|
| Desktop  |-----> |   VPS    | 
|          |       |          |-------\          /------^-----\
|----------|       |----------|        \        /              \
                                        \-------|   Internet   |
                                                 \             /
                                                  \-----------/
 


The connection between my Desktop and VPS is an OpenVPN tunnel. The connection between the VPS and VPN is also an OpenVPN tunnel, provided by Private Internet Access (PIA). We cannot simply duplicate the logic from my last blogpost in this situation, because it is based on the UID of the Netflix application, and in the current setup the routing is done on a different machine than the one the Netflix application runs on. In this particular example the advantages of routing to the VPN service from PIA are limited, but once you get multiple devices (laptop, mobile phone...) on the left side of the VPS, and start using multiple VPN services or locations, it starts to pay off.
IPIP
We need a way to tell the VPS which gateway to take. It would of course be possible to set up another VPN connection between my Desktop and VPS, but that would be tedious, especially once the number of clients and gateways increases. In my current setup OpenVPN is configured in routed mode (using TUN interfaces rather than TAP), which means we cannot use any layer-2 specific features (like abusing the QoS bit to indicate the desired gateway). I decided to go with an IPIP (IP over IP) tunnel. When an IP packet is sent through an IPIP tunnel, the only thing that happens is that an extra outer IP header (with its own source and destination addresses) is prepended to the packet, as illustrated in the Wireshark capture below:
[Wireshark capture of an IPIP-encapsulated packet]
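The encapsulation cost is easy to quantify: the outer IPv4 header is a fixed 20 bytes per packet, so the tunnel MTU simply drops by that amount. A quick sketch of the arithmetic (the 1500-byte parent MTU is an assumption; substitute your own):

```shell
# Fixed cost of IPIP encapsulation: one extra IPv4 header of 20 bytes.
PARENT_MTU=1500      # assumed MTU of the underlying OpenVPN interface
IPIP_OVERHEAD=20     # size of the outer IPv4 header
TUNNEL_MTU=$((PARENT_MTU - IPIP_OVERHEAD))
echo "Tunnel MTU: $TUNNEL_MTU"   # Tunnel MTU: 1480
```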
Setting it up
In the steps outlined below I'll assume you've already set up a VPN connection between the Desktop and the VPS. If you have not, you could take a look at this blogpost. On both the client and server the interfaces are called tun0. Furthermore, I'll assume you've already configured a VPN on the VPS towards the US, called tunUS, as described in my last post.
Creating the IPIP-tunnel
Because the IPIP-tunnel can only be initialized once the OpenVPN connection between the Desktop (Client) and VPS (Server) has been established, I put the logic of creating this IPIP-tunnel in a script that would be called by OpenVPN once the connection is initialized (using the OpenVPN 'up' directive).

On the client side:

code:
#!/bin/bash

# make sure all routes have been initialized
sleep 5

# Encapsulate over the OpenVPN link; the outer addresses are the OpenVPN
# gateway (.1) and client (.10) IPs
ip tunnel add tunUS-in-tun0 mode ipip remote 172.31.254.1 local 172.31.254.10
ip address add 172.31.252.2/30 dev tunUS-in-tun0
ip link set tunUS-in-tun0 up

# Routing table 2 sends all traffic into the IPIP tunnel
ip route add default via 172.31.252.1 src 172.31.252.2 table 2
ip route add 172.31.252.0/30 dev tunUS-in-tun0  proto kernel  scope link  src 172.31.252.2 table 2



The 172.31.254.1 and .10 addresses are the OpenVPN gateway and client IP. I'd have preferred to use the variables that OpenVPN passes to this script, but for some reason the supplied gateway IP is wrong (or technically right, but unusable for this purpose). On the server side we have a script that is almost the same:

code:
#!/bin/bash

# make sure all routes have been initialized
sleep 5

ip tunnel add tunUS-in-tun0 mode ipip remote 172.31.254.10 local 172.31.254.1
ip address add 172.31.252.1/30 dev tunUS-in-tun0
ip link set tunUS-in-tun0 up

ip route add 172.31.252.0/30 dev tunUS-in-tun0  proto kernel  scope link  src 172.31.252.1 table 2
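OpenVPN also offers a 'down' directive; a hypothetical companion script (mirroring the interface name used above) could tear the tunnel down cleanly when the VPN link drops, so the up-script can recreate it:

```shell
#!/bin/bash
# Hypothetical 'down' script: remove the IPIP tunnel when the OpenVPN
# connection it rides on goes away. The interface name matches the
# up-scripts above; the routes in table 2 disappear with the interface.
ip link set tunUS-in-tun0 down 2>/dev/null
ip tunnel del tunUS-in-tun0 2>/dev/null
```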

Directing traffic on the client
Next up, we need to direct the traffic from the Netflix application into the IPIP tunnel. You could very well add this to the same script as the IPIP tunnel on the client:

code:
# Mark all packets generated by the 'netflix' group
iptables -t mangle -A OUTPUT -m owner --gid-owner netflix -j MARK --set-mark 2
# Rewrite their source address to the client's tunnel endpoint
iptables -t nat -A POSTROUTING -o tunUS-in-tun0 -j SNAT --to 172.31.252.2
# Route marked packets using table 2 (which defaults into the tunnel)
ip rule add fwmark 2 table 2
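The --gid-owner match assumes a dedicated netflix group (and a user in it) already exists; my previous blogpost covers that, but if you are starting fresh, a minimal sketch would be:

```shell
# Assumption: no 'netflix' user/group exists yet. Create a system group
# and a matching login-less user whose traffic the mangle rule will mark.
groupadd --system netflix
useradd --system --gid netflix --shell /usr/sbin/nologin netflix
```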



To verify that everything so far works, you can start a ping to a random public IP as user netflix on the client, and use tcpdump on the server to verify that the traffic comes in from the right IP on the tunUS-in-tun0 interface.
Forwarding traffic from Client to VPN on VPS
To get the traffic from the VPS to Netflix via the VPN service, we still need to route all traffic from the tunUS-in-tun0 interface to the tunUS interface on the VPS:

code:
# Allow the kernel to forward packets between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
# Rewrite the tunnel addresses to the VPN-assigned address on the way out
iptables -t nat -A POSTROUTING -s 172.31.252.0/24 -o tunUS -j SNAT --to-source 10.198.1.6
# Traffic arriving from the IPIP tunnel is routed via table 2,
# whose default route points at the VPN gateway
ip rule add from 172.31.252.0/24 lookup 2
ip route add default via 10.198.1.5 dev tunUS  src 10.198.1.6 table 2



In this case, 10.198.1.5 and .6 are the gateway and client IP from my VPN service; I retrieved those using:

code:
# ip a s tunUS
17: tunUS: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
    link/none 
    inet 10.198.1.6 peer 10.198.1.5/32 scope global tunUS



You could most likely retrieve these from the environment variables OpenVPN exports when invoking a script through the up-script configuration directive. Beware: in the case of PIA these addresses change about once a day, so hardcoding them may not be the way to go.
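A sketch of what that could look like: OpenVPN exports variables such as $dev, $ifconfig_local and $route_vpn_gateway to the script named in the 'up' directive. Whether all of them are populated depends on your OpenVPN version and topology, so treat this as an assumption to verify rather than a drop-in script:

```shell
#!/bin/bash
# Sketch: derive the VPN addresses from OpenVPN's exported environment
# instead of hardcoding them. $dev, $ifconfig_local and $route_vpn_gateway
# are set by OpenVPN when it invokes the 'up' script; verify they are
# populated in your setup before relying on them.
VPN_IF="${dev:?dev not set}"                     # e.g. tunUS
VPN_LOCAL="${ifconfig_local:?not set}"           # e.g. 10.198.1.6
VPN_GW="${route_vpn_gateway:-$ifconfig_remote}"  # e.g. 10.198.1.5

iptables -t nat -A POSTROUTING -s 172.31.252.0/24 -o "$VPN_IF" -j SNAT --to-source "$VPN_LOCAL"
ip rule add from 172.31.252.0/24 lookup 2
ip route add default via "$VPN_GW" dev "$VPN_IF" src "$VPN_LOCAL" table 2
```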
RP_Filter
Linux has a feature called reverse-path filtering, which drops incoming traffic on an interface when there is no route back to the source address via that same interface. Because it does not take the use of multiple routing tables into account, we'll need to disable it on both the client and the server:


code:
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
    echo 0 > $f
done
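Note that values written to /proc are lost on reboot, and the same goes for the ip_forward toggle from the previous section. Assuming a distro that reads /etc/sysctl.d/ (most modern ones do), a persistence sketch; on the client you'd only keep the rp_filter lines, since forwarding is a VPS concern:

```shell
# Persist the settings across reboots (the file name is arbitrary).
cat > /etc/sysctl.d/99-vpn-routing.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
EOF
sysctl --system   # re-apply all sysctl configuration files
```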

Troubleshooting the setup
As with all debugging, you begin by initiating traffic on the client, just as the Netflix application would:

code:
client# sudo -u netflix ping 8.8.8.8



Then, you should see the traffic going out on the right interface:

code:
client# sudo tcpdump -i tunUS-in-tun0 -ns0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunUS-in-tun0, link-type RAW (Raw IP), capture size 65535 bytes
03:13:47.599418 IP 172.31.252.2 > 8.8.8.8: ICMP echo request, id 25294, seq 115, length 64
03:13:47.835465 IP 8.8.8.8 > 172.31.252.2: ICMP echo reply, id 25294, seq 115, length 64



Please take note of the IP addresses. If these don't match, things are bound to go wrong.

So once you've confirmed the traffic leaves your desktop from the right interface with the right IP addresses, it's time to hop onto the VPS and see if the traffic comes in there:

code:
server# sudo tcpdump -i tunUS-in-tun0 -ns0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunUS-in-tun0, link-type RAW (Raw IP), capture size 65535 bytes
03:12:38.890815 IP 172.31.252.2 > 8.8.8.8: ICMP echo request, id 25294, seq 64, length 64
03:12:39.104200 IP 8.8.8.8 > 172.31.252.2: ICMP echo reply, id 25294, seq 64, length 64
03:12:39.894865 IP 172.31.252.2 > 8.8.8.8: ICMP echo request, id 25294, seq 65, length 64
03:12:40.108261 IP 8.8.8.8 > 172.31.252.2: ICMP echo reply, id 25294, seq 65, length 64



The traffic you see here should match 1:1 with the output of the previous command. If it does not (or you have no traffic at all on this interface), there's something wrong with your IPIP tunnel.

The last step is to check whether the traffic towards the VPN service leaves your VPS correctly:

code:
server# sudo tcpdump -i tunUS -ns0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tunUS, link-type RAW (Raw IP), capture size 65535 bytes
03:17:38.141751 IP 10.198.1.6 > 8.8.8.8: ICMP echo request, id 25294, seq 363, length 64
03:17:38.355633 IP 8.8.8.8 > 10.198.1.6: ICMP echo reply, id 25294, seq 363, length 64
03:17:39.136641 IP 10.198.1.6 > 8.8.8.8: ICMP echo request, id 25294, seq 364, length 64
03:17:39.350106 IP 8.8.8.8 > 10.198.1.6: ICMP echo reply, id 25294, seq 364, length 64



The outgoing IP here should match the tunUS address. If you don't see any traffic at all, it's likely that one of your iptables rules, ip rules, or routes was dropped. This can happen if your tunUS interface is removed (which happens if, for example, you restart OpenVPN, or PIA does).
RP_Filter
If you do see traffic coming in at one place, but it doesn't reach the application or next hop, it may not be related to any of your firewall rules, ip rules, or routes. It probably means reverse-path filtering is kicking in. You can debug this using:

code:
echo 1 > /proc/sys/net/ipv4/conf/<interfacename>/log_martians
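With martian logging enabled, dropped packets show up in the kernel log; a quick way to check (after reproducing the traffic):

```shell
# rp_filter drops appear in the kernel log as "martian source" entries
dmesg | grep -i martian
```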




Edit September 24, 2013: I did some more digging. It turns out that rp_filtering will by definition have to be disabled. The post has been modified to reflect this.


Comments


By Tweakers user H!GHGuY, Tuesday 17 September 2013 14:36

Why do you need an IP-over-IP tunnel? You could use plain simple routing instead and do away with the extra overhead the tunnel gives you.

Not sure what streaming netflix uses (I thought it was ABR and thus carried by HTTP/TCP) but if you do the math you'll see that you're creating a lot of overhead...

By Tweakers user Freeaqingme, Tuesday 17 September 2013 18:20

H!GHGuY wrote on Tuesday 17 September 2013 @ 14:36:
Why do you need an IP-over-IP tunnel? You could use plain simple routing instead and do away with the extra overhead the tunnel gives you.
I more or less outlined the reasons for this in my previous blogpost already. The problem is that Netflix uses a lot of dynamic IPs that are also used for other services (e.g. Netflix uses Amazon). I don't want to route all my traffic to Amazon via the US, just the traffic meant for Netflix.

Also, in this example I'm using Netflix, but you could apply the exact same mechanism to any other IP-based application, like a torrent app. It's impossible to route that using plain destination-based routing.
H!GHGuY wrote on Tuesday 17 September 2013 @ 14:36:
Not sure what streaming netflix uses (I thought it was ABR and thus carried by HTTP/TCP) but if you do the math you'll see that you're creating a lot of overhead...
It uses ABR indeed, via HTTPS. My VPN clocks in at an MTU of 1500 bytes, and the extra IPv4 header takes 20 bytes, resulting in an MTU of 1480 bytes for the IPIP tunnel. That amounts to an overhead of about 1.4% (20/1480), so I'm inclined to think that's more than reasonable, and does not qualify as 'a lot of overhead'. But of course, opinions may differ ;)
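The percentage can be reproduced in one line (assuming the 1500/20 byte figures above):

```shell
# Overhead of the 20-byte outer header relative to the 1480-byte tunnel MTU
awk 'BEGIN { printf "%.2f%%\n", 100 * 20 / (1500 - 20) }'   # prints 1.35%
```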

[Comment edited on Tuesday 17 September 2013 18:23]


By Tweakers user H!GHGuY, Wednesday 18 September 2013 12:45

I think your VPN might be misconfigured.
Depending on the technology you're using (PPTP/IPsec/GRE/...) you're adding more bytes.
So when you really send 1500byte packets over your VPN, they probably get fragmented.

So the overhead must be higher, don't you think?

By Tweakers user Freeaqingme, Wednesday 18 September 2013 14:48

Yes and no. There is something borked on the network I'm currently on that results in failing MTU tests (OpenVPN reports an MTU of 1541 bytes, which seems, well, unlikely on a WAN interface). The only positive thing is that I'm not seeing any fragmentation, but that may be because of ABR and the fact that the tunnel itself is UDP. I need to get that fixed, but that's unrelated to the blogpost ;)

However, the extra IP(v4) header is a fixed 20 bytes. So when you initialize an IPIP tunnel (for IPv4), the kernel automatically sets its MTU 20 bytes lower than that of the parent interface:

code:
# ip a s
[...]
56: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
    link/none 
    inet 172.31.254.10 peer 172.31.254.9/32 scope global tun0
39: tunUS: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN 
    link/ipip 172.31.254.10 peer 172.31.254.1
    inet 172.31.252.2/30 scope global tunUS



So, the overhead of the IPIP tunnel is by definition 20 bytes per packet. That's all the overhead you get, as long as the MTU of the parent interface is set correctly :)

Comments are closed