inovex has released a sidecar container image – pgbouncer-vault-sidecar – that eases the use of short-lived Postgres database credentials. Combined with HashiCorp Vault, it makes injecting static database credentials into pods obsolete.
Security is of utmost importance today, especially for organizations handling sensitive data. One common way to increase security is to use short-lived credentials: even if a credential is stolen or leaked, an attacker only has a narrow window in which to exploit it, which makes this a highly effective security measure.
HashiCorp Vault is a popular solution for securely managing short-lived credentials. It provides a centralized management system for secrets, such as database credentials. Access to secrets can be managed via policies, and audit logs help to identify intrusions. However, integrating Vault with your application can be complex and time-consuming. Applications have to deal with:
- Authenticating against Vault
- Obtaining secrets from Vault
- Renewing the leased credentials
- Coping with rotating credentials
HashiCorp has created Vault Agent to deal with most of these problems. Coping with rotating credentials, however, remains challenging: applications should adopt the new credentials while continuing to run, in order to avoid downtime.
Let’s take database credentials as an example. Many applications depend on a database, such as Postgres, and use credentials to authenticate against it over the network. If the credentials change frequently, the application has to detect each change and update its database connections gracefully, without interrupting service. This requires code changes in every application that wants to make use of Vault. In the worst case, an application cannot be changed at all.
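To make this burden concrete, here is a minimal, hypothetical sketch of the kind of reconnect logic each application would otherwise need to carry itself. The database driver and Vault client are stubbed out with in-memory stand-ins (`FakeConn`, the `current` dict); real code would use an actual Postgres driver and a Vault client instead.

```python
# Hypothetical sketch of the rotation handling an application would need
# without the sidecar. The database driver and Vault are stubbed out.

class AuthError(Exception):
    """Raised when the database rejects the current credentials."""

class RotatingConnection:
    def __init__(self, fetch_credentials, connect):
        self._fetch_credentials = fetch_credentials  # e.g. a Vault read
        self._connect = connect                      # e.g. driver.connect
        self._conn = None

    def query(self, sql):
        for _attempt in range(2):
            if self._conn is None:
                user, password = self._fetch_credentials()
                self._conn = self._connect(user, password)
            try:
                return self._conn.execute(sql)
            except AuthError:
                # Credentials rotated underneath us: drop the stale
                # connection and retry once with fresh credentials.
                self._conn = None
        raise AuthError("could not authenticate after refresh")

# --- tiny in-memory stand-ins to demonstrate the behaviour ---
current = {"user": "v-old", "password": "old-secret"}

class FakeConn:
    def __init__(self, user, password):
        self.user, self.password = user, password
    def execute(self, sql):
        if (self.user, self.password) != (current["user"], current["password"]):
            raise AuthError("password authentication failed")
        return f"{sql} as {self.user}"

conn = RotatingConnection(
    fetch_credentials=lambda: (current["user"], current["password"]),
    connect=FakeConn,
)
print(conn.query("SELECT 1"))  # works with the initial credentials
current.update(user="v-new", password="new-secret")  # Vault rotates
print(conn.query("SELECT 2"))  # reconnects transparently
```

Multiply this by every application and every language in your fleet, and the appeal of pushing the logic into a reusable sidecar becomes clear.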
This is where pgbouncer-vault-sidecar comes in. It is a project developed by inovex that simplifies the integration process, making it easier for developers to use Vault to manage their Postgres credentials.
How it works
pgbouncer-vault-sidecar is a sidecar that can be deployed with your application. It serves as a connection pooler and takes care of authentication. It exposes the database to the main application via localhost where the pod serves as a security boundary. The sidecar is easy to configure and works with every application that uses Postgres – legacy or cloud-native.
The sidecar authenticates against Vault, obtains and renews leases for database credentials, connects to the database, and switches to new credentials once the lease can no longer be renewed.
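The lifecycle described above can be modeled with a short simulation. This is an illustrative sketch only — `FakeVault`, `Lease`, and `run_sidecar` are invented names, not the sidecar's actual internals — using the same TTLs as the demo below (a 1-minute lease renewable up to a 3-minute maximum):

```python
# Simplified model of the sidecar's credential lifecycle; Vault is
# stubbed out. All names here are illustrative, not the real internals.
import itertools

class Lease:
    def __init__(self, name, ttl, max_ttl):
        self.user = name      # short-lived Postgres role created by Vault
        self.age = 0          # minutes the lease has been held
        self.ttl = ttl
        self.max_ttl = max_ttl

class FakeVault:
    """Stand-in for Vault's database secrets engine (ttl=1m, max_ttl=3m)."""
    def __init__(self):
        self._ids = itertools.count(1)
    def read_creds(self):
        return Lease(f"v-readonly-{next(self._ids)}", ttl=1, max_ttl=3)
    def renew(self, lease):
        if lease.age + lease.ttl > lease.max_ttl:
            return False        # max TTL reached, renewal refused
        lease.age += lease.ttl  # lease extended for another TTL period
        return True

def run_sidecar(vault, minutes):
    """Each minute: renew the lease, or fetch fresh credentials (and
    reload the connection pooler) once renewal is no longer possible."""
    lease = vault.read_creds()
    users = [lease.user]
    for _ in range(minutes):
        if not vault.renew(lease):
            lease = vault.read_creds()  # new short-lived Postgres role
            users.append(lease.user)    # pooler switches to the new user
    return users

print(run_sidecar(FakeVault(), minutes=7))
# A new Postgres role appears roughly every max-TTL interval.
```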
Aside from authentication, being a connection pooler, the sidecar also helps to reduce resource consumption on the database server and can provide metrics on database connections.
Let’s see it in action
To show the benefit of the sidecar, we need Vault, a Postgres database cluster, and a demo application running inside Kubernetes.
We use Minikube to provision a development cluster:
```shell
$ minikube start
...
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
```
Once our cluster is ready, we deploy Vault and a Postgres database. kubectl should be configured to use the "default" namespace.
```shell
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault
    namespace: default
---
apiVersion: v1
kind: Service
metadata:
  name: vault
spec:
  selector:
    app: vault
  ports:
    - protocol: TCP
      port: 8200
      targetPort: 8200
---
apiVersion: v1
kind: Pod
metadata:
  name: vault
  labels:
    app: vault
spec:
  serviceAccountName: vault
  containers:
    - name: vault
      image: hashicorp/vault:1.13.2
      command:
        - vault
        - server
        - -dev
        - -dev-root-token-id=root
        - -dev-listen-address=0.0.0.0:8200
      env:
        - name: VAULT_ADDR
          value: http://0.0.0.0:8200
  restartPolicy: OnFailure
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  containers:
    - name: postgres
      image: postgres:14.7
      env:
        - name: POSTGRES_PASSWORD
          value: supersecret
  restartPolicy: OnFailure
EOF
```
Vault allows applications to authenticate via Kubernetes service accounts. We have to enable this in Vault.
```shell
$ kubectl exec vault -- vault auth enable kubernetes
$ kubectl exec vault -- vault write auth/kubernetes/config \
    kubernetes_host=https://kubernetes.default.svc.cluster.local. \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    token_reviewer_jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token
```
Furthermore, we have to add the Postgres cluster to Vault and define a database role. This role is used to create short-lived roles in Postgres (HashiCorp and Postgres use the term role differently). We purposefully set the TTL and max TTL to a low value to demonstrate that leases are automatically renewed and PGBouncer switches to new credentials seamlessly. In production, those values should be higher.
```shell
$ kubectl exec vault -- vault secrets enable database
$ kubectl exec vault -- vault write database/config/my-database \
    plugin_name=postgresql-database-plugin \
    allowed_roles="readonly" \
    connection_url="postgresql://postgres:supersecret@postgres.default.svc.cluster.local.:5432/?sslmode=disable"
$ kubectl exec vault -- vault write database/roles/readonly \
    db_name=my-database \
    default_ttl="1m" \
    max_ttl="3m" \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"
```
In order to use the role "readonly", we have to bind the application's service account to yet another role (this one in the context of the Kubernetes authentication method) and assign a policy that allows obtaining a lease for "readonly".
```shell
$ kubectl exec vault -- vault write auth/kubernetes/role/my-application \
    bound_service_account_names=my-application \
    bound_service_account_namespaces=default \
    policies=my-database-readonly \
    ttl=1h
$ kubectl exec vault -i -- vault policy write my-database-readonly - <<EOF
path "database/creds/readonly" {
  capabilities = ["read", "list"]
}
EOF
```
Now we have everything set up to start our application. For demonstration purposes, we continuously query the current user (from the perspective of Postgres).
```shell
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-application
---
apiVersion: v1
kind: Pod
metadata:
  name: my-application
spec:
  serviceAccountName: my-application
  restartPolicy: OnFailure
  containers:
    - name: main-application
      image: postgres
      command:
        - bash
        - -c
        - while true; do psql --host=localhost --dbname=postgres --command='SELECT NOW(), current_user;'; sleep 20; done
    - name: pgbouncer-vault-sidecar
      image: ghcr.io/inovex/pgbouncer-vault-sidecar:0.3.0
      env:
        - name: VAULT_ADDR
          value: http://vault.default.svc.cluster.local.:8200
        - name: VAULT_PATH
          value: database/creds/readonly
        - name: VAULT_KUBERNETES_ROLE
          value: my-application
        - name: DB_NAME
          value: postgres
        - name: DB_HOST
          value: postgres.default.svc.cluster.local.
        - name: TLS_MODE
          value: disable
EOF
$ kubectl logs -c main-application --follow my-application
...
             now             |                    current_user
-----------------------------+----------------------------------------------------
 2023-05-04 13:20:14.9637+00 | v-kubernet-my-role-CC0I3yH3NgU01KHT5rRR-1683206244
(1 row)

              now              |                    current_user
-------------------------------+----------------------------------------------------
 2023-05-04 13:20:35.033061+00 | v-kubernet-my-role-5Ls2ZFVqnwWJ3bBqF3Kf-1683206409
(1 row)
...
```
The main application can now connect to Postgres with minimal configuration! Looking at the logs, we can see that the user changes once the maximum lease time (3 minutes) has been reached. PGBouncer automatically adopted the new credentials.
PGBouncer is configured for transaction pooling. That is, a server connection to Postgres is occupied by the application only until the current transaction is committed (or rolled back). PGBouncer switches to the new credentials gracefully (see PGBouncer's RELOAD command): long-running transactions keep using the old role, while new transactions use the new one.
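Conceptually, the setup the sidecar maintains resembles a PGBouncer configuration like the following. This is an illustrative fragment, not the sidecar's actual generated file — the option names (`pool_mode`, `auth_file`, `listen_addr`, `listen_port`) are standard PGBouncer settings, but the paths and values are assumptions:

```ini
; Illustrative PGBouncer fragment, not the sidecar's generated config.
[databases]
; The application connects to localhost; PGBouncer forwards queries to
; Postgres using whatever credentials Vault currently leases.
postgres = host=postgres.default.svc.cluster.local. port=5432

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 5432
; Server connections are handed back as soon as a transaction ends, so
; a RELOAD lets new transactions pick up rotated credentials while
; in-flight transactions finish on the old role.
pool_mode = transaction
auth_file = /etc/pgbouncer/userlist.txt  ; rewritten on each rotation
```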
Conclusion
pgbouncer-vault-sidecar is a simple and secure way to introduce short-lived credentials into your application. By leveraging the sidecar pattern, you can easily integrate Vault into any application that uses Postgres without adding unnecessary complexity.
We encourage you to try out pgbouncer-vault-sidecar. It is available on GitHub. Contributions are welcome 🙂