Pritunl: Running a VPN in Kubernetes

Pritunl is a VPN server with a number of additional security and access control features.

Essentially, it is a wrapper around OpenVPN that adds access control lists on top of it in the form of Organizations, users, and routes.

The task: deploy a test Pritunl instance in Kubernetes to get a feel for it from the inside.

For now we will use the free version; we will look at the paid one later. The differences and prices can be seen here>>>.

We will run it in Minikube, and for installation we will use the Helm chart from Dysnix.

Create a namespace:

kubectl create ns pritunl-local

namespace/pritunl-local created

Adding a repository:

helm repo add dysnix https://dysnix.github.io/charts
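
Before installing, it can also be useful to refresh the repo index and look at the chart's default values to see what can be overridden; these are standard Helm commands, nothing chart-specific is assumed here:

helm repo update

helm show values dysnix/pritunl | less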

And install the chart with Pritunl:

helm -n pritunl-local install pritunl dysnix/pritunl

Pritunl default access credentials:

export POD_ID=$(kubectl get pod --namespace pritunl-local -l app=pritunl,release=pritunl -o jsonpath="{.items[0].metadata.name}")

kubectl exec -t -i --namespace pritunl-local $POD_ID pritunl default-password

export VPN_IP=$(kubectl get svc --namespace pritunl-local pritunl --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")

echo "VPN access IP address: ${VPN_IP}"

Checking the pods:

kubectl -n pritunl-local get pod

NAME                               READY   STATUS    RESTARTS   AGE

pritunl-54dd47dc4d-672xw           1/1     Running   0          31s

pritunl-mongodb-557b7cd849-d8zmj   1/1     Running   0          31s

We get the login and password from the Pritunl pod:

kubectl exec -t -i --namespace pritunl-local pritunl-54dd47dc4d-672xw pritunl default-password

Administrator default password:

username: “pritunl”

password: “zZymAt1tH2If”

Checking the Services:

kubectl -n pritunl-local get svc

NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE

pritunl           LoadBalancer   10.104.33.93    <pending>     1194:32350/TCP   116s

pritunl-mongodb   ClusterIP      10.97.144.132   <none>        27017/TCP        116s

pritunl-web       ClusterIP      10.98.31.71     <none>        443/TCP          116s

Here, the pritunl LoadBalancer service is for clients to reach the VPN server, and the pritunl-web ClusterIP service is for access to the web interface.
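
To see which Pods actually back these Services, you can check the endpoints (a standard kubectl command):

kubectl -n pritunl-local get endpoints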

Forwarding a port to the web interface:

kubectl -n pritunl-local port-forward svc/pritunl-web 8443:443

Forwarding from 127.0.0.1:8443 -> 443

Forwarding from [::1]:8443 -> 443
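
Before opening the browser, you can quickly verify the forward works. Pritunl serves its web UI over HTTPS with a self-signed certificate, so curl needs -k; getting back any HTTP code (e.g. 200 or a redirect) means the UI is answering:

curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443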

We open https://localhost:8443:

Log in and get into the initial settings:

Here, the Public Address will automatically be set to the public address of the host Pritunl itself is running on, and later it will be substituted into the client configs as the VPN host address.

Since our Pritunl runs in Kubernetes, which runs in VirtualBox, which runs on Linux on a regular home PC, this address does not suit us, but we will come back to it later. For now, you can leave it as it is.

The rest of the settings are of no interest to us yet.

Organization, Users

See Initial Setup.

There are Groups to combine users, but they are only available in the full version; we will look at that later.

Also, users can be grouped through Organizations.

Go to Users, add Organization:

Adding a user:

PIN, email – optional, not needed now.

Pritunl Server and routes

See Server configuration.

Go to Servers, add a new one:

Here:

  • DNS Server: the DNS server we will hand out to clients
  • Port, Protocol: the port and protocol for OpenVPN, which will be launched "inside" Pritunl and will accept connections from our users
  • Virtual Network: the network from whose address pool private IPs will be allocated for clients

For the Virtual Network I would pick 172.16.0.0 – that way the home network, the Kubernetes networks, and the client IPs will all differ, which makes debugging more convenient (see below). See IPv4 Private Address Space and Filtering.
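
A quick sketch to confirm nothing on the host already routes 172.16.x.x (the grep pattern is only an illustration; empty output means no collision):

ip route show | grep '172\.16\.'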

At the same time, it is important that the Server port and protocol here match the port and protocol on the LoadBalancer – 1194 TCP.

That is, a request from the working machine will take the following route:

  • 192.168.3.0/24 – home network
  • gets into the VirtualBox network 192.168.59.1/24 (see Proxy)
  • goes to the LoadBalancer in the Kubernetes network 10.96.0.0/12
  • and the LoadBalancer sends the request to the Kubernetes Pod, where OpenVPN is listening on TCP port 1194

Checking LoadBalancer itself:

kubectl -n pritunl-local get svc pritunl

NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE

pritunl   LoadBalancer   10.104.33.93   <pending>     1194:32350/TCP   22m

Port 1194, TCP. We will deal with the Pending status a bit later.

Specify Virtual Network, port and protocol for Server:

Next, connect the Organization with all its users:

We start the server:

We check the process and port in the Kubernetes Pod – we see our OpenVPN server on port 1194:

kubectl -n pritunl-local exec -ti pritunl-54dd47dc4d-672xw -- netstat -anp | grep 1194

Defaulted container "pritunl" out of: pritunl, alpine (init)

tcp6       0      0 :::1194                 :::*                    LISTEN      1691/openvpn
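
While the LoadBalancer is still Pending, the same port can already be reached from the host through the Service's NodePort – 32350 in the svc output above – on the Minikube VM address. A quick sketch, assuming nc is installed and minikube ip returns the VM IP:

nc -vz $(minikube ip) 32350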

Now let's go and fix the LoadBalancer.

minikube tunnel

See Kubernetes: Minikube and LoadBalancer in "Pending" status for full details; for now, just run minikube tunnel:

minikube tunnel

[sudo] password for setevoy:

Status:

machine: minikube

pid: 1467286

route: 10.96.0.0/12 -> 192.168.59.108

minikube: Running

services: [pritunl]

errors:

minikube: no errors

router: no errors

loadbalancer emulator: no errors

Check the LoadBalancer:

kubectl -n pritunl-local get svc pritunl

NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE

pritunl   LoadBalancer   10.104.33.93   10.104.33.93   1194:32350/TCP   139m

The EXTERNAL-IP appeared – check the connection:

telnet 10.104.33.93 1194

Trying 10.104.33.93…

Connected to 10.104.33.93.

Escape character is '^]'.

We return to the main Settings and set Public Address == LoadBalancer IP:

OpenVPN – connect to server

Go to Users, click Download profile:

Unpack the archive:
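
The profile is downloaded as a tar archive; the exact archive name depends on the Organization, user, and server names – here it is assumed to match the .ovpn file used below. After unpacking, the remote line should point at the Public Address and port we configured:

tar xf local-org_local-user_local-server.tar

grep '^remote ' local-org_local-user_local-server.ovpn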

And we connect using a regular OpenVPN client:

sudo openvpn --config local-org_local-user_local-server.ovpn

[sudo] password for setevoy:

2022-10-04 15:58:32 Attempting to establish TCP connection with [AF_INET]10.104.33.93:1194 [nonblock]

2022-10-04 15:58:32 TCP connection established with [AF_INET]10.104.33.93:1194

2022-10-04 15:58:33 net_addr_v4_add: 172.16.0.2/24 dev tun0

2022-10-04 15:58:33 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this

2022-10-04 15:58:33 Initialization Sequence Completed
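
You can also confirm the tunnel interface is up with the client address from the log above (172.16.0.2 from our Virtual Network):

ip addr show tun0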

But now the network will not work:

traceroute 1.1.1.1

traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets

1  * * *

2  * * *

3  * * *

Since the VPN pushes the default 0.0.0.0/0 route through the same host the VPN itself is running on, we end up with a routing "loop".
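
To see which way traffic to 1.1.1.1 is actually routed while connected, ip route get prints the selected interface and gateway – with the default route pushed, it points into tun0, i.e. back into the tunnel:

ip route get 1.1.1.1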

Go to Servers, stop the server and delete the Default route:

Click Add Route – add a route to 1.1.1.1 through our VPN, while all other traffic from the client will keep using its normal routes:

Restart the connection:

sudo openvpn --config local-org_local-user_local-server.ovpn

We check the routes on the host machine, locally:

route -n

Kernel IP routing table

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface

0.0.0.0         192.168.3.1     0.0.0.0         UG    100    0        0 enp38s0

1.1.1.1         172.16.0.1      255.255.255.255 UGH   0      0        0 tun0

And we check the network – the request went through the VPN:

traceroute 1.1.1.1

traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets

1  172.16.0.1 (172.16.0.1)  0.211 ms  41.141 ms  41.146 ms

2  * * *

“It works!” (c)

Done.
