Pritunl: Running a VPN in Kubernetes

Pritunl is a VPN server with a bunch of additional features for security and access control.

Basically, Pritunl is a wrapper over OpenVPN that adds Access Control Lists in the form of Organizations, users, and routes.

The task is to deploy a test instance of Pritunl in Kubernetes to get a feel for it from the inside.

For now, we will use the free version; we will look at the paid version later. You can see the differences and pricing here.

We will run it in Minikube and install it using the Helm chart from Dysnix.
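A Minikube cluster is assumed to be already up and running here; if in doubt, check it first:

minikube status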

We create a namespace:

kubectl create ns pritunl-local

namespace/pritunl-local created

We add the repository:

helm repo add dysnix https://dysnix.github.io/charts
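Optionally, you can dump the chart's default values first to see what can be customized (service type, ports, and so on; the exact keys depend on the chart version):

helm show values dysnix/pritunl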

And we install the chart with Pritunl:

helm -n pritunl-local install pritunl dysnix/pritunl

Pritunl default access credentials:

export POD_ID=$(kubectl get pod --namespace pritunl-local -l app=pritunl,release=pritunl -o jsonpath="{.items[0].metadata.name}")

kubectl exec -t -i --namespace pritunl-local $POD_ID pritunl default-password

export VPN_IP=$(kubectl get svc --namespace pritunl-local pritunl --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")

echo "VPN access IP address: ${VPN_IP}"

We check pods:

kubectl -n pritunl-local get pod

NAME                               READY   STATUS    RESTARTS   AGE

pritunl-54dd47dc4d-672xw           1/1     Running   0          31s

pritunl-mongodb-557b7cd849-d8zmj   1/1     Running   0          31s

We get the login password from the master pod:

kubectl exec -t -i --namespace pritunl-local pritunl-54dd47dc4d-672xw pritunl default-password

Administrator default password:

username: "pritunl"

password: "zZymAt1tH2If"

Services:

kubectl -n pritunl-local get svc

NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE

pritunl           LoadBalancer   10.104.33.93    <pending>     1194:32350/TCP   116s

pritunl-mongodb   ClusterIP      10.97.144.132   <none>        27017/TCP        116s

pritunl-web       ClusterIP      10.98.31.71     <none>        443/TCP          116s

Here, the pritunl LoadBalancer service is for client access to the VPN server, and the pritunl-web ClusterIP service is for accessing the web interface.

Forward the web interface port to the local machine:

kubectl -n pritunl-local port-forward svc/pritunl-web 8443:443

Forwarding from 127.0.0.1:8443 -> 443

Forwarding from [::1]:8443 -> 443

We open https://localhost:8443:

Log in and get to the main settings:

Here, Public Address is automatically set to the public address of the host on which Pritunl is running; later it will be substituted into client configurations as the VPN host address.

Since our Pritunl runs in Kubernetes, which runs in VirtualBox, which runs on Linux on an ordinary home PC, this value is not suitable for us, but we will return to it later. For now, you can leave it as it is.

We are not interested in other settings yet.

Organization, Users

See Initial Setup.

There are Groups for combining users, but they are only available in the paid version; we will look at them later.

Also, users can be grouped through Organizations.

Go to Users, add Organization:

Add a user:

PIN and email are optional; we don't need them now.

Pritunl Server and routes

See Server configuration.

Go to Servers, add a new one:

Here:

  • DNS Server: the DNS server that clients will be given
  • Port, Protocol: the port and protocol for OpenVPN, which will run "inside" Pritunl and accept connections from our users
  • Virtual Network: the network from whose address pool private IPs will be allocated to clients

I would set the Virtual Network to 172.16.0.0: then our home network, the Kubernetes network, and the client IPs will all be different, which makes debugging more convenient; see IPv4 Private Address Space and Filtering.

At the same time, it is important that the Server's port and protocol here match the port and protocol on the LoadBalancer: 1194 TCP.
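If you want to double-check the port and protocol exposed by the LoadBalancer Service in a scriptable form, a jsonpath query like this works:

kubectl -n pritunl-local get svc pritunl -o jsonpath='{.spec.ports[0].port} {.spec.ports[0].protocol}{"\n"}'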

That is, a request from the workstation will follow this route:

  • 192.168.3.0/24 – the home network
  • it will get to the VirtualBox network 192.168.59.1/24 (see Proxy)
  • then go to the LoadBalancer on the Kubernetes network 10.96.0.0/12
  • and the LoadBalancer will route the request to the Kubernetes Pod, in which our OpenVPN listens on TCP port 1194

We check the LoadBalancer itself:

kubectl -n pritunl-local get svc pritunl

NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE

pritunl   LoadBalancer   10.104.33.93   <pending>     1194:32350/TCP   22m

The port is 1194 TCP. We will deal with the <pending> status a little later.

Specify the Virtual Network, port and protocol for the Server:

Next, we connect the Organization with all its users:

Let’s start the server:

We check the process and port in the Kubernetes Pod – we see our OpenVPN Server on port 1194:

kubectl -n pritunl-local exec -ti pritunl-54dd47dc4d-672xw -- netstat -anp | grep 1194

Defaulted container "pritunl" out of: pritunl, alpine (init)

tcp6       0      0 :::1194                 :::*                    LISTEN      1691/openvpn

And let's fix the LoadBalancer.

minikube tunnel

See Kubernetes: Minikube, and LoadBalancer in "Pending" status for the details; for now, just run minikube tunnel:

minikube tunnel

[sudo] password for setevoy:

Status:

machine: minikube

pid: 1467286

route: 10.96.0.0/12 -> 192.168.59.108

minikube: Running

services: [pritunl]

errors:

minikube: no errors

router: no errors

loadbalancer emulator: no errors

We check the LoadBalancer:

kubectl -n pritunl-local get svc pritunl

NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE

pritunl   LoadBalancer   10.104.33.93   10.104.33.93   1194:32350/TCP   139m

The EXTERNAL-IP has appeared; let's check the connection:

telnet 10.104.33.93 1194

Trying 10.104.33.93...

Connected to 10.104.33.93.

Escape character is '^]'.

We return to the main Settings, specify Public Address == LoadBalancer IP:
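The LoadBalancer IP can be taken straight from the Service; this is essentially what the chart's NOTES commands above do:

kubectl -n pritunl-local get svc pritunl -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'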

OpenVPN – connection to the server

Go to Users, click Download profile:

We unpack the archive:
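Pritunl serves the profile as a tar archive; the exact file name depends on your Organization, user, and server names, so adjust it accordingly:

tar xf local-org_local-user_local-server.tar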

And we connect using the usual OpenVPN client:

sudo openvpn --config local-org_local-user_local-server.ovpn

[sudo] password for setevoy:

2022-10-04 15:58:32 Attempting to establish TCP connection with [AF_INET]10.104.33.93:1194 [nonblock]

2022-10-04 15:58:32 TCP connection established with [AF_INET]10.104.33.93:1194

2022-10-04 15:58:33 net_addr_v4_add: 172.16.0.2/24 dev tun0

2022-10-04 15:58:33 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this

2022-10-04 15:58:33 Initialization Sequence Completed
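The client now has a tun0 interface with an address from the Virtual Network pool (172.16.0.2 in the log above), which can be confirmed with:

ip addr show tun0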

However, the network will not work now:

traceroute 1.1.1.1

traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets

1  * * *

2  * * *

3  * * *

Since the VPN pushes a default route for 0.0.0.0/0 through the same host on which the VPN itself runs, we get a routing loop.
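You can inspect the routes OpenVPN added on the client; depending on the configuration, the pushed default route may show up as a single 0.0.0.0/0 entry or as the 0.0.0.0/1 plus 128.0.0.0/1 pair:

ip route show dev tun0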

Go to Servers, stop the server, and delete the Default Route:

Click Add Route to add a route to 1.1.1.1 through our VPN; the rest of the client's traffic will follow its normal routes:

We start the connection again:

sudo openvpn --config local-org_local-user_local-server.ovpn

We check the routes on the host machine, locally:

route -n

Kernel IP routing table

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface

0.0.0.0         192.168.3.1     0.0.0.0         UG    100    0        0 enp38s0

1.1.1.1         172.16.0.1      255.255.255.255 UGH   0      0        0 tun0

And we check the network – the request went through the VPN:

traceroute 1.1.1.1

traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 60 byte packets

1  172.16.0.1 (172.16.0.1)  0.211 ms  41.141 ms  41.146 ms

2  * * *

"It works!" (c)

Done.
