
Setting Up Kubernetes Authentication for Hashicorp Vault: Complete Implementation Guide
Table of Contents
- Step 1: Create Your Application Service Account
- Step 2: Set Up Vault’s Token Reviewer Account
- Step 3: Enable the Kubernetes Auth Method in Vault
- Step 4: Configure Vault to Talk to Kubernetes
- Step 5: Create a Vault Policy for Your App
- Step 6: Create the Vault Role That Ties It All Together
- Step 7: Test It Out!
I remember the first time I had to deal with secrets in Kubernetes. What a mess! We had environment variables scattered throughout deployment files, secrets base64 encoded (which isn’t really encryption), and the occasional “did someone just commit that API key to GitHub?” moment.
After one particularly painful security audit, it became clear we needed a better approach. That’s when I started diving into HashiCorp Vault and its Kubernetes integration.
The beauty of Vault’s Kubernetes auth method is that it leverages something you already have, your cluster’s service accounts, to establish trust. No more secrets in your container images or ConfigMaps. Instead, your pods authenticate to Vault using their identity, grab just the secrets they need, and do so in a way that’s both auditable and revocable.
I’ve now implemented this pattern across several clusters, and while the initial setup has a few moving parts, the operational benefits are absolutely worth it. In this post, I’ll walk through how to set this up from scratch. I’ll cover creating the necessary service accounts, configuring Vault, and testing everything works. I’ve included the exact commands I use when setting this up for new teams and hopefully they’ll save you some of the trial and error I went through!
Whether you’re running a small dev cluster or managing multiple production environments, this approach provides a solid foundation for keeping your secrets well protected.
Step 1: Create Your Application Service Account
First, you need a service account for your application. This is the identity your app will use to talk to Vault:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-app-token-review
  namespace: mytest
---
apiVersion: v1
kind: Secret
metadata:
  name: test-app-token-review
  namespace: mytest
  annotations:
    kubernetes.io/service-account.name: test-app-token-review
type: kubernetes.io/service-account-token
```
I put this in the mytest namespace because that’s where my app runs. The explicit Secret creation is something I learned the hard way because it turns out Kubernetes 1.24+ doesn’t automatically create token secrets anymore.
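Because of that 1.24+ behavior change, I like to sanity-check that the token controller actually populated the Secret before moving on. A quick check, assuming your kubectl context points at the right cluster:

```shell
# Confirm Kubernetes populated the token field of the Secret we just created
kubectl get secret test-app-token-review -n mytest -o jsonpath='{.data.token}' \
  | base64 --decode | head -c 40
echo
# Empty output means the token controller hasn't filled in the token yet
```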
Step 2: Set Up Vault’s Token Reviewer Account
This was the trickiest part for me. Vault needs its own service account to validate tokens. Think of it as Vault’s bouncer checking IDs at the door:
When your app presents its token to Vault saying “Hey, I’m the test-app from the mytest namespace,” Vault doesn’t just take its word for it. Vault turns to Kubernetes and says “Is this actually a legitimate token for this app?”
To have that conversation with Kubernetes, Vault needs its own identity and permission to ask those questions. That’s what this service account provides; it’s Vault’s official “security consultant” identity that Kubernetes recognizes and trusts to ask about token validity.
Without this service account and its permissions, Vault would have no way to verify if a token is legitimate or expired, or which service account it actually belongs to. It’s a crucial piece that enables the trust relationship between Vault and Kubernetes.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-vault-token-review
secrets:
  - name: test-vault-token-review
---
apiVersion: v1
kind: Secret
metadata:
  name: test-vault-token-review
  annotations:
    kubernetes.io/service-account.name: test-vault-token-review
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tokenreview-role
rules:
  - apiGroups: ["authentication.k8s.io"]
    resources: ["tokenreviews"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tokenreview-role
subjects:
  - kind: ServiceAccount
    name: test-vault-token-review
    namespace: default
```
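If you want to see what that bouncer conversation actually looks like, you can make the same TokenReview call yourself. This is a sketch of the API request Vault issues under the hood, reusing the app token from Step 1:

```shell
# Extract the app token (the "ID badge" Vault will be asked to verify)
APP_JWT=$(kubectl get secret test-app-token-review -n mytest \
  -o jsonpath='{.data.token}' | base64 --decode)

# Ask the TokenReview API whether the token is legitimate -- this is the
# call the tokenreview-role ClusterRole above authorizes Vault to make
kubectl create --raw /apis/authentication.k8s.io/v1/tokenreviews -f - <<EOF
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "spec": {"token": "$APP_JWT"}
}
EOF
# A valid token comes back with "authenticated": true and the
# service account's username in the status block
```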
Step 3: Enable the Kubernetes Auth Method in Vault
Before I dive into the steps, I should mention that this is not meant to be a complete Vault tutorial. I’m assuming you already have access to a Vault cluster and know your way around basic vault commands like vault write and vault policy. What we’re focusing on here is specifically the Kubernetes authentication integration – that bridge between your cluster and your secrets management.
The commands below are what you’ll need to run to configure Vault to recognize and trust your Kubernetes cluster. If you’re brand new to Vault, you might want to spend some time with HashiCorp’s docs first to get comfortable with concepts like auth methods, policies, and tokens. That said, if you’ve used Vault even a little bit, you should be able to follow along just fine.
```shell
vault auth enable -path=kubernetes_clustertest2 kubernetes
```
I like using custom paths (like kubernetes_clustertest2 here) because we have multiple clusters, and this makes it clear which is which. You could just use kubernetes if you only have one.
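With several clusters each getting their own path, it’s worth a quick check that the mount landed where you expect:

```shell
# List enabled auth methods; the custom path should appear as
# kubernetes_clustertest2/ with type "kubernetes"
vault auth list
```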
Step 4: Configure Vault to Talk to Kubernetes
This is where you connect the dots between Vault and your cluster:
```shell
# Get the CA cert for your cluster
kubectl get configmap kube-root-ca.crt -n vault -o jsonpath='{.data.ca\.crt}' > k8s-ca.crt

# Get the token that Vault will use from the test-vault-token-review secret we created earlier
TOKEN_REVIEWER_JWT=$(kubectl get secret test-vault-token-review -o jsonpath='{.data.token}' | base64 --decode)

# Configure Vault with all the credentials needed to communicate with your kubernetes host
vault write auth/kubernetes_clustertest2/config \
    kubernetes_host="https://kubernetes.default.svc" \
    kubernetes_ca_cert=@k8s-ca.crt \
    token_reviewer_jwt="$TOKEN_REVIEWER_JWT" \
    disable_iss_validation=true \
    disable_local_ca_jwt=true
```
I went with kubernetes.default.svc because Vault and Kubernetes are in the same cluster. This internal DNS name just works out of the box. In production, though, you’ll often find yourself with Vault hosted in one cluster and your applications running in completely different clusters. That’s when you’ll need to use the external API server URL instead, something like https://api.cluster-name.example.com:6443.
When I first set this up across our dev and prod environments, I learned the hard way that certificate validation gets trickier with external URLs. You need to make sure Vault trusts the Kubernetes API server’s TLS certificate, and sometimes that means extra configuration steps. But for a multi-cluster setup, it’s worth the effort since you get centralized secret management across your entire infrastructure.
For simplicity in this demo, I’ve stuck with the internal DNS approach, but just know that in a real enterprise setup, you’d likely be using those external endpoints.
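For completeness, here’s what the multi-cluster variant looks like. The shape is the same, only the host and the source of the CA change. A sketch, assuming a remote kubeconfig context named prod-cluster, an auth mount at kubernetes_prod, and an illustrative external hostname (all three are assumptions, not values from this setup):

```shell
# Pull the CA from the *remote* cluster this time
kubectl --context=prod-cluster get configmap kube-root-ca.crt -n kube-system \
  -o jsonpath='{.data.ca\.crt}' > prod-k8s-ca.crt

# Point Vault at the external API server instead of the in-cluster DNS name;
# Vault must be able to validate this endpoint's TLS certificate
vault write auth/kubernetes_prod/config \
    kubernetes_host="https://api.cluster-name.example.com:6443" \
    kubernetes_ca_cert=@prod-k8s-ca.crt \
    token_reviewer_jwt="$TOKEN_REVIEWER_JWT"
```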
Step 5: Create a Vault Policy for Your App
Next, define what your app is allowed to access:
```shell
vault policy write testwebservice - <<EOF
path "secret/data/testwebservice/config" {
  capabilities = ["read"]
}
EOF
```
I’m being pretty restrictive here; just read access to one path. You could be more generous, but I’ve found it’s better to start tight and loosen as needed.
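Of course, the policy is only useful if something actually lives at that path. Assuming the KV v2 engine is mounted at secret/ (which is why the policy path contains data/ even though the CLI path doesn’t), you can seed a test value like this, with placeholder values:

```shell
# KV v2 quirk: "vault kv put secret/..." writes to the API path secret/data/...
# The values here are demo placeholders, not real credentials
vault kv put secret/testwebservice/config db_user="app" db_password="example-only"

# Read it back to confirm
vault kv get secret/testwebservice/config
```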
Step 6: Create the Vault Role That Ties It All Together
This is where the magic happens: connecting Kubernetes identities to Vault permissions.
```shell
# Create the role that binds the Kubernetes service account to the Vault policy
vault write auth/kubernetes_clustertest2/role/testwebservice \
    bound_service_account_names=test-app-token-review \
    bound_service_account_namespaces=mytest \
    policies=testwebservice \
    ttl=24h
```
This says “when the test-app-token-review service account from the mytest namespace authenticates, give it the testwebservice policy for 24 hours.” The TTL is important because I’ve seen people set this too low and wonder why their apps keep losing access.
Step 7: Test It Out!
Finally, time to see if all that work paid off:
```shell
# Get the app's JWT token
TOKEN_REVIEW_SJWT=$(kubectl get secret test-app-token-review -n mytest -o go-template='{{ .data.token }}' | base64 --decode)

# Try to authenticate to vault from the test_namespace
curl -k --header "X-Vault-Namespace: test_namespace" \
    --request POST \
    --data '{"jwt": "'$TOKEN_REVIEW_SJWT'", "role": "testwebservice"}' \
    https://vault-cluster.example.com/v1/auth/kubernetes_clustertest2/login
```
In plain English, the app’s JWT token is like an ID badge that Kubernetes automatically gives to your application.
Think of it this way: when your application runs in Kubernetes, the cluster gives it this special token that proves “Yes, this is really the application running as the service account we set up.”
When your application wants to get secrets from Vault, it shows this ID badge. Vault then checks with Kubernetes to verify “Is this ID badge legitimate?” before giving access to any secrets.
It’s not something you normally need to manage yourself as Kubernetes automatically creates and mounts these tokens into your pods. The only reason we’re extracting it manually in the example is for testing the authentication flow with curl.
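Inside a pod, you’d skip kubectl entirely and read the token Kubernetes has already mounted for you. A minimal sketch of what an app or init script would run, assuming the same Vault address and role as above (and that jq is available in the image):

```shell
# The service account token Kubernetes mounts into every pod by default
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Log in and capture the Vault client token from the JSON response
VAULT_TOKEN=$(curl -s --request POST \
    --data '{"jwt": "'"$JWT"'", "role": "testwebservice"}' \
    https://vault-cluster.example.com/v1/auth/kubernetes_clustertest2/login \
  | jq -r '.auth.client_token')
```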
If everything’s working, you’ll get back a Vault token. That’s what your app will use to access secrets.
The -k flag is just for testing; in production, you’d want proper TLS validation. The namespace header is only needed if you’re using Vault Enterprise with namespaces.
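With that Vault token in hand, fetching the secret is one more call against the path the policy granted. A sketch, reusing the same example hostname and assuming VAULT_TOKEN holds the client_token from the login response:

```shell
# Read the secret the testwebservice policy allows
curl -k -s --header "X-Vault-Token: $VAULT_TOKEN" \
    https://vault-cluster.example.com/v1/secret/data/testwebservice/config
# With KV v2, the key/value pairs sit under .data.data in the JSON response
```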
And there you have it! Once it’s set up, your apps can authenticate to Vault using their own identity, no shared secrets needed. The first time I got this working end-to-end was genuinely satisfying. No more kubectl apply-ing updated secrets or worrying about who has access to what.