PostgreSQL: Running in Kubernetes with PostgreSQL Operator, and Monitoring with Prometheus


So, we have our Druid up and running – see Apache Druid: Overview, Running on Kubernetes, and Monitoring with Prometheus. So far it runs in its default form, that is, with the local Apache Derby database used as the metadata storage.

Next, we will switch Druid to PostgreSQL, and later on we will remove ZooKeeper from it as well.

For starters, let's launch a PostgreSQL cluster in Kubernetes, add the PostgreSQL Exporter for Prometheus, and set up metrics collection.

We will again run everything in Minikube; for PostgreSQL we will use the Zalando Operator, and the Exporter will be launched as a sidecar container.

We will not dig deep into the Operator itself yet, although it is very interesting and we will surely get to play with it later. For now we only need monitoring.

Documentation: Administrator Guide.

Create a namespace:

kubectl create ns postgres-operator

namespace/postgres-operator created

Add Helm repository:

helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator
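If the repo was already added earlier, refresh the local chart index just in case:

helm repo update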

Installing the operator itself:

helm -n postgres-operator install postgres-operator postgres-operator-charts/postgres-operator

Optionally, add the Web UI for the Operator:

helm repo add postgres-operator-ui-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator-ui

helm -n postgres-operator install postgres-operator-ui postgres-operator-ui-charts/postgres-operator-ui

Checking the pods:

kubectl -n postgres-operator get pods

NAME                                    READY   STATUS    RESTARTS   AGE

postgres-operator-649799f4bd-dz5bl      1/1     Running   0          82s

postgres-operator-ui-5cfff55c65-v4bjj   1/1     Running   0          22s

Open access to the Operator's web UI Service:

kubectl -n postgres-operator port-forward svc/postgres-operator-ui 8081:80

Forwarding from 127.0.0.1:8081 -> 8081

Forwarding from [::1]:8081 -> 8081

And check it in a browser at http://localhost:8081:

We will not create anything here, though – instead, we will take a ready-made example of a cluster config.

Clone the repository:

git clone https://github.com/zalando/postgres-operator.git

cd postgres-operator/

The manifests directory contains several examples; let's take manifests/minimal-master-replica-svcmonitor.yaml – it describes a Namespace, a cluster with users and databases, two Services, and two ServiceMonitors, plus a sidecar with the Prometheus Exporter.
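For reference, the core of the cluster definition in that manifest looks roughly like this (abbreviated and reconstructed from the example manifests in the repo – check your local copy for the exact contents):

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
  namespace: test-pg
spec:
  teamId: "acid"
  numberOfInstances: 2
  volume:
    size: 1Gi
  users:
    zalando:            # database owner
    - superuser
    - createdb
    foo_user: []        # plain role for an application
  databases:
    foo: zalando        # dbname: owner
  preparedDatabases:
    bar: {}
  postgresql:
    version: "13"
  sidecars:
  - name: "exporter"    # the postgres_exporter container – full definition in the manifest
    ...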

We apply it:

kubectl apply -f manifests/minimal-master-replica-svcmonitor.yaml

namespace/test-pg created

postgresql.acid.zalan.do/acid-minimal-cluster created

service/acid-minimal-cluster-svc-metrics-master created

service/acid-minimal-cluster-svc-metrics-replica created

servicemonitor.monitoring.coreos.com/acid-minimal-cluster-svcm-master created

servicemonitor.monitoring.coreos.com/acid-minimal-cluster-svcm-replica created

Checking the cluster:

kubectl -n test-pg get postgresql

NAME                   TEAM   VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE     STATUS

acid-minimal-cluster   acid   13        2      1Gi                                     2m21s   Running

And its pods:

kubectl -n test-pg get po

NAME                     READY   STATUS    RESTARTS   AGE

acid-minimal-cluster-0   2/2     Running   0          37s

acid-minimal-cluster-1   1/2     Running   0          24s

Each pod has its own role, which is set in the labels: spilo-role=master or spilo-role=replica.
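So the current master can be found simply by selecting pods by that label (Patroni re-assigns these labels during a failover):

kubectl -n test-pg get pods -l spilo-role=master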

PostgreSQL users

See the documentation here and here.

Users are described in the block – surprise – users:

kubectl -n test-pg get postgresql -o yaml

...
  users:
    foo_user: []
    zalando:
    - superuser
    - createdb
...

And for each of them, a separate Kubernetes Secret is created:

kubectl -n test-pg get secret

NAME                                                                 TYPE     DATA   AGE

foo-user.acid-minimal-cluster.credentials.postgresql.acid.zalan.do   Opaque   2      38m

postgres.acid-minimal-cluster.credentials.postgresql.acid.zalan.do   Opaque   2      38m

standby.acid-minimal-cluster.credentials.postgresql.acid.zalan.do    Opaque   2      38m

zalando.acid-minimal-cluster.credentials.postgresql.acid.zalan.do    Opaque   2      38m
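The credentials can be pulled out of any of them in the usual way – for example, for foo_user (each Secret holds a username and a password key):

kubectl -n test-pg get secret foo-user.acid-minimal-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.username}' | base64 -d

kubectl -n test-pg get secret foo-user.acid-minimal-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d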

These Secrets are then mapped into the pods through environment variables:

kubectl -n test-pg get statefulsets acid-minimal-cluster -o yaml

...
    - env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
      - name: POSTGRES_USER
        value: postgres
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            key: password
            name: postgres.acid-minimal-cluster.credentials.postgresql.acid.zalan.do
...

Well, let’s check.

Get the password:

kubectl -n test-pg get secret postgres.acid-minimal-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d

CcWdAaqvPA8acxwIpVyM8UHkds2QG3opC3KD7rO1TxITQ1q31cwYLTswzfBeTVsN

Opening the port:

kubectl -n test-pg port-forward acid-minimal-cluster-0 6432:5432

Log in and check the databases:

psql -U postgres -h localhost -p 6432

Password for user postgres:

psql (14.5, server 13.7 (Ubuntu 13.7-1.pgdg18.04+1))

SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)

Type “help” for help.

postgres=# \l

                                 List of databases
   Name    |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+-----------+----------+-------------+-------------+-----------------------
 bar       | bar_owner | UTF8     | en_US.utf-8 | en_US.utf-8 |
 foo       | zalando   | UTF8     | en_US.utf-8 | en_US.utf-8 |
 postgres  | postgres  | UTF8     | en_US.utf-8 | en_US.utf-8 |
 template0 | postgres  | UTF8     | en_US.utf-8 | en_US.utf-8 | =c/postgres          +
           |           |          |             |             | postgres=CTc/postgres
 template1 | postgres  | UTF8     | en_US.utf-8 | en_US.utf-8 | =c/postgres          +
           |           |          |             |             | postgres=CTc/postgres
(5 rows)

postgres=#
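While we are connected to the master, we can also make sure streaming replication to the second pod is alive (a quick check – the exact set of columns depends on the PostgreSQL version):

postgres=# select client_addr, state, sync_state from pg_stat_replication;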

See the Sidecar definitions documentation.

We already have a sidecar added from the manifest, so each pod has two containers: PostgreSQL itself and its Exporter:

kubectl -n test-pg get po acid-minimal-cluster-0 -o jsonpath="{.spec.containers[*].name}"

postgres exporter

Let’s see if there are metrics there – open the port:

kubectl -n test-pg port-forward svc/acid-minimal-cluster-svc-metrics-master 9187:9187

Forwarding from 127.0.0.1:9187 -> 9187

Forwarding from [::1]:9187 -> 9187

And we see that there is nothing to see – as far as the exporter is concerned, the cluster is "dead": pg_up == 0:

curl -s localhost:9187/metrics | grep pg_ | grep -v '#'

pg_exporter_last_scrape_duration_seconds 1.00031302

pg_exporter_last_scrape_error 1

pg_exporter_scrapes_total 9

pg_up 0

Why? Because the exporter needs credentials to connect – a login and a password.

In the sidecar configuration in the manifest, add new variables (see Environment Variables in the postgres_exporter documentation):

...
      env:
      - name: "DATA_SOURCE_URI"
        value: "$(POD_NAME)/postgres?sslmode=require"
      - name: "DATA_SOURCE_USER"
        value: "$(POSTGRES_USER)"
      - name: "DATA_SOURCE_PASS"
        value: "$(POSTGRES_PASSWORD)"
      - name: "PG_EXPORTER_AUTO_DISCOVER_DATABASES"
        value: "true"
...

That is, the Operator creates a StatefulSet in which it sets the POSTGRES_USER and POSTGRES_PASSWORD variables, which we then reference via $(VAR) to set the sidecar's own variables.

Save and update:

kubectl apply -f manifests/minimal-master-replica-svcmonitor.yaml

Checking the variables in the pod itself:

kubectl -n test-pg get po acid-minimal-cluster-0 -o yaml

...
    - env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
      - name: POSTGRES_USER
        value: postgres
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            key: password
            name: postgres.acid-minimal-cluster.credentials.postgresql.acid.zalan.do
      - name: DATA_SOURCE_URI
        value: $(POD_NAME)/postgres?sslmode=require
      - name: DATA_SOURCE_USER
        value: $(POSTGRES_USER)
      - name: DATA_SOURCE_PASS
        value: $(POSTGRES_PASSWORD)
      - name: PG_EXPORTER_AUTO_DISCOVER_DATABASES
        value: "true"
...
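To double-check that Kubernetes resolved the $(VAR) references, you can also print the environment inside the exporter container itself (note that this will print the password to the terminal):

kubectl -n test-pg exec acid-minimal-cluster-0 -c exporter -- env | grep DATA_SOURCE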

And again we check the metrics in the exporter:

curl -s localhost:9187/metrics | grep pg_ | grep -v '#' | tail -5

pg_stat_replication_pg_current_wal_lsn_bytes{application_name="acid-minimal-cluster-0",client_addr="172.17.0.17",server="acid-minimal-cluster-1:5432",slot_name="182",state="streaming"} 1.52655344e+08
pg_stat_replication_pg_wal_lsn_diff{application_name="acid-minimal-cluster-0",client_addr="172.17.0.17",server="acid-minimal-cluster-1:5432",slot_name="182",state="streaming"} 0
pg_stat_replication_reply_time{application_name="acid-minimal-cluster-0",client_addr="172.17.0.17",server="acid-minimal-cluster-1:5432",slot_name="182",state="streaming"} 1.663625745e+09
pg_static{server="acid-minimal-cluster-1:5432",short_version="13.7.0",version="PostgreSQL 13.7 (Ubuntu 13.7-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit"} 1
pg_up 1

pg_up == 1 – yay! It Works!
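And since PG_EXPORTER_AUTO_DISCOVER_DATABASES is enabled, per-database metrics are collected too – for example, database sizes (pg_database_size_bytes is one of postgres_exporter's default metrics):

curl -s localhost:9187/metrics | grep pg_database_size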

Prometheus ServiceMonitors

Open a port to access Prometheus itself:

kubectl -n monitoring port-forward svc/kube-prometheus-stack-prometheus 9090:9090

Forwarding from 127.0.0.1:9090 -> 9090

Forwarding from [::1]:9090 -> 9090

Check Status > Service Discovery – PostgreSQL is not visible here:

ServiceMonitors were also created from the manifest:

kubectl -n test-pg get servicemonitor

NAME                                AGE

acid-minimal-cluster-svcm-master    65m

acid-minimal-cluster-svcm-replica   65m

We repeat the "dirty hack" we used for the Druid – add the release: "kube-prometheus-stack" label to both ServiceMonitors, wait a minute or two, and check Service Discovery again.
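Something like this – assuming your kube-prometheus-stack Helm release is indeed named kube-prometheus-stack, since the Prometheus Operator only picks up ServiceMonitors whose labels match its serviceMonitorSelector:

kubectl -n test-pg label servicemonitor acid-minimal-cluster-svcm-master release=kube-prometheus-stack

kubectl -n test-pg label servicemonitor acid-minimal-cluster-svcm-replica release=kube-prometheus-stack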

And the metrics start showing up on the graphs.
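For a quick sanity check in the Prometheus UI, you can plot the same metrics we just saw in the exporter's output, for example:

pg_up

pg_stat_replication_pg_wal_lsn_diff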

Done.
