Deploying and Exploring HashiCorp Vault on Kubernetes
Introduction
Vault is a secrets management service from HashiCorp (the company behind Terraform). It provides centralized management of sensitive information such as credentials used by different services and arbitrary key-value pairs.
Each cloud provider offers a managed service in the same category, such as AWS Secrets Manager and GCP Secret Manager. Vault can be installed in a variety of ways, including package managers, Docker images, and Helm (see the installation page); this time we will install it on a k8s cluster using Helm. The Helm chart also ships Vault-related components beyond the plain Vault server, including features that integrate with k8s secrets, so we will try those out as well.
Setup
- Installation steps: Run Vault on kubernetes
Various items can be customized when installing Vault with Helm, but we will start by installing it with the default settings. See Configuration for customizable items.
Additionally, Helm allows you to choose from the following four modes for the Vault configuration:
- Dev: A mode for development. Data is stored in memory and is not persisted.
- Standalone: Runs in a single pod. Data is stored in a PersistentVolume and persisted.
- HA: A High Availability configuration running across multiple pods.
- External: A configuration that utilizes an external Vault server.
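For reference, a non-default mode is selected via Helm values. A minimal sketch for HA mode with integrated Raft storage (the replica count is an example; verify the value names against your chart version):

```yaml
# values.yaml (sketch): switch the chart from Standalone to HA mode
server:
  ha:
    enabled: true
    replicas: 3        # example value
    raft:
      enabled: true    # use integrated Raft storage for HA
```

Passing this file with helm install vault hashicorp/vault -f values.yaml would deploy in HA mode instead of the default.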
In this case, we will use the default Standalone mode. Since a PersistentVolume is required when the pod starts, we create one in advance (here, via a dynamic provisioner with OpenEBS). Add the HashiCorp repository to Helm and install hashicorp/vault.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install vault hashicorp/vault
The installation creates pods and services like the following:
$ kubectl get pod,svc
NAME                                        READY   STATUS    RESTARTS   AGE
pod/vault-0                                 0/1     Running   0          58s
pod/vault-agent-injector-55748c487f-hjxrj   1/1     Running   0          59s

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/vault                      ClusterIP   10.97.192.100    <none>        8200/TCP,8201/TCP   59s
service/vault-agent-injector-svc   ClusterIP   10.110.234.104   <none>        443/TCP             59s
service/vault-internal             ClusterIP   None             <none>        8200/TCP,8201/TCP   59s
service/vault-ui                   ClusterIP   10.111.79.217    <none>        8200/TCP            10d
Initialization and Unsealing
If the installation succeeds, the vault-0 pod starts, but it is not Ready at this point. In all modes other than Dev, the Vault server starts in a "sealed" state. As the name suggests, most operations are unavailable in this state, so you need to perform an operation called unseal.
Follow CLI initialize and unseal to execute the initialization process and unsealing. First, run vault operator init inside the vault-0 pod.
$ kubectl exec -ti vault-0 -- vault operator init
Unseal Key 1: C0eR4peV1QvcCVl5OCG9i6lkEr4/wlYIfIZs/YaGYe2F
Unseal Key 2: QZUUlu9Gdir9bC+qT2cBMZJqIvfhqmglv24/iKlsUKdv
Unseal Key 3: X1SN6V/DoPWmQjG9T4irbbmfWBX1JqOHXW8wAS92pm5w
Unseal Key 4: vXoXviXzA0MsBZKuFk0Cjg3uqzqUC4x8kharKMFcjE49
Unseal Key 5: 2gFG5OCZ4pdn2YDwKgIG5OvZ4I8omsa9TGMO6aHg3fQP
Initial Root Token: hvs.pmlUwjaoZKR3tQTUU0GnBnOV
Vault initialized with 5 key shares and a key threshold of 3. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.
Vault does not store the generated root key. Without at least 3 keys to
reconstruct the root key, Vault will remain permanently sealed!
It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
As the message states, the five Unseal Keys and the Initial Root Token above are critical credentials in Vault, so handle them carefully.
Make a note of these values as they will be used later. Next, execute the unseal using the Unseal keys above. Running vault operator unseal inside the pod will prompt you for an unseal key, so enter one of the outputted unseal key values.
$ kubectl exec -ti vault-0 -- vault operator unseal
Unseal Key (will be hidden): # Enter Unseal key 1
Repeating this three times with distinct keys completes the unsealing, and the pod becomes Ready. The Vault server is now ready for use, and you can read and write secrets.
Installing the Vault CLI
The Vault CLI is already installed inside the Vault pod, so the vault command can be run there, but it is more convenient to run commands from outside the pod, so we will also install the Vault CLI on a node. Install the version matching your OS and architecture from Install Vault. For example, to install the linux/amd64 binary:
$ wget https://releases.hashicorp.com/vault/1.15.6/vault_1.15.6_linux_amd64.zip
$ unzip vault_1.15.6_linux_amd64.zip
$ sudo mv vault /usr/local/bin
$ rm vault_1.15.6_linux_amd64.zip
$ vault version
Vault v1.15.6 (615cf6f1dce9aa91bc2035ce33b9f689952218f0), built 2024-02-28T17:07:34Z
Autocompletion can be enabled by running the following and restarting your shell:
vault -autocomplete-install
To connect to the Vault server, specify the connection destination using an option during command execution or in the VAULT_ADDR environment variable. The k8s service for connecting to the Vault pod is created as a ClusterIP, so if you are connecting from within the k8s cluster, you can specify this CLUSTER-IP (ingress configuration is required for connecting from outside the cluster).
$ kubectl get svc
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
vault   ClusterIP   10.111.155.187   <none>        8200/TCP,8201/TCP   4d23h
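For reference, access from outside the cluster could be provided with an Ingress like the following sketch (the hostname vault.example.com and the nginx ingress class are assumptions; adjust to your environment):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vault
spec:
  ingressClassName: nginx        # assumption: an NGINX ingress controller is installed
  rules:
    - host: vault.example.com    # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vault      # the ClusterIP service created by the chart
                port:
                  number: 8200
```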
Additionally, Vault authentication is required to execute various commands. While there are many ways to do this, the quickest way is to use the root token from initialization.
When using environment variables, specify the address in VAULT_ADDR and the token value in VAULT_TOKEN.
export VAULT_ADDR="http://10.111.155.187:8200"
export VAULT_TOKEN="hvs.pmlUwjaoZKR3tQTUU0GnBnOV"
This allows for successful connection and authentication to the Vault server, enabling various commands such as reading and writing secrets.
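With those variables set, connectivity and authentication can be sanity-checked before going further (a sketch against the server set up above):

```
$ vault status        # prints seal status and version if the connection succeeds
$ vault token lookup  # succeeds only if VAULT_TOKEN is valid
```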
Testing Vault k8s Features
We have successfully built a Vault server on the k8s cluster.
By creating an Ingress and making it accessible from outside the cluster, you can perform secret writing and reading without being particularly aware that it is running on k8s.
Plenty of articles already cover general Vault usage, so here we will focus on features specific to running Vault on k8s.
Vault CSI provider
The Vault CSI provider is one of Vault's features that utilizes the secrets-store-csi-driver. Although prior configuration is required, it allows secrets to be dynamically retrieved from the Vault server at pod creation time and referenced within the pod.
The usage is described on the following HashiCorp pages, so we will try using it with these as a reference.
- https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-secret-store-driver
- https://developer.hashicorp.com/vault/docs/auth/kubernetes
Preparation
To use the Vault CSI provider, secrets-store-csi-driver is required, so install it using Helm.
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace kube-system
The Vault CSI provider is added by including --set "csi.enabled=true" when installing Vault via Helm.
helm install vault hashicorp/vault --set "csi.enabled=true"
Once the installation is complete, the vault-csi-provider pod (daemonset) will start.
vault-0 1/1 Running 0 20h
vault-csi-provider-k92cr 2/2 Running 0 20h
Preparation on the Vault side
To verify the operation of referencing secrets on the Vault side within a pod, write some appropriate secret to the Vault server. Any secret to reference is fine; here, we will write a username and password key-value pair to the path database/secret using the kv v2 type.
# Enable the kv v2 secret engine
$ vault secrets enable -path=secret kv-v2
# Write
$ vault kv put -mount=secret database/secret username=myusername password=mypassword
# Reference
$ vault kv get -mount=secret database/secret
======= Secret Path =======
secret/data/database/secret
...
====== Data ======
Key         Value
---         -----
password    mypassword
username    myusername
The Vault CSI provider authenticates to Vault using the Kubernetes auth method, so enable it with the following command.
$ vault auth enable kubernetes
To configure the k8s host on the Vault side, execute the following command inside the Vault pod (this must be done inside the pod because the KUBERNETES_PORT_443_TCP_ADDR environment variable is only set there).
# Start a shell in the pod
$ kubectl exec -it vault-0 -- sh
# Set environment variables to connect to the Vault server
/ $ export VAULT_ADDR="http://vault.vault:8200"
/ $ export VAULT_TOKEN="[root_token value]"
# Configure the k8s host for Kubernetes authentication
/ $ vault write auth/kubernetes/config kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
Read the configuration back and confirm that kubernetes_host is set to the API server's IP address.
/ $ vault read auth/kubernetes/config
Key                      Value
---                      -----
disable_iss_validation   true
disable_local_ca_jwt     false
issuer                   n/a
kubernetes_ca_cert       n/a
kubernetes_host          https://10.96.0.1:443
pem_keys                 []
Also, create the Vault-side role and policy used during Kubernetes authentication.
In the policy, set the permission to read the path secret/data/database/secret of the kv v2 created above.
$ vault policy write db-policy - <<EOF
path "secret/data/database/*" {
  capabilities = ["read"]
}
EOF
The role is created by mapping it to a serviceAccount (SA) on the k8s side. The SA name can be anything at this point, but you will use the value specified here when creating it on the k8s side later (same for the namespace). For policies, specify the db-policy created above and bind it to a role named database.
- Vault-side role name: database
- k8s-side ServiceAccount: db
- k8s-side namespace: default
$ vault write auth/kubernetes/role/database \
    bound_service_account_names=db \
    bound_service_account_namespaces=default \
    policies=db-policy \
    ttl=20m
Preparation on the k8s side
On the k8s side, create a SecretProviderClass custom resource to retrieve secrets from Vault.
In the manifest, specify the following items in spec.parameters:
- vaultAddress: The address of the Vault server. Since the Vault server is exposed with the service name vault in the vault namespace, you can reach it at http://vault.vault:8200.
- roleName: The Kubernetes auth role name created on the Vault server side; database in the example above.
- objects: The secrets to retrieve from the Vault side, in list format.
  - secretPath: The path to the target secret on the Vault server.
  - secretKey: The key name to retrieve within the secret.
  - objectName: The name under which the retrieved value is stored. Inside the pod, the value is written to a file with this name.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-secret-class
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.vault:8200"
    roleName: database
    objects: |
      - objectName: "obj-username"
        secretPath: "secret/data/database/secret"
        secretKey: "username"
      - objectName: "obj-password"
        secretPath: "secret/data/database/secret"
        secretKey: "password"
Also, create a ServiceAccount to execute the above:
kubectl create sa db
Next, create the pod that will actually reference the secret.
To reference the secretProviderClass object created above within the pod, define it in spec.volumes. Specify the resource name db-secret-class created earlier in the secretProviderClass attribute.
volumes:
  - name: vault-db-creds
    csi:
      driver: 'secrets-store.csi.k8s.io'
      readOnly: true
      volumeAttributes:
        secretProviderClass: db-secret-class
To mount this to a specific container within the pod, specify spec.containers[].volumeMounts.
For name, specify the same name as volumes.name above, and for mountPath, specify the path to mount inside the container.
spec:
  containers:
    - volumeMounts:
        - name: vault-db-creds
          mountPath: '/mnt/secrets-store'
          readOnly: true
For this verification, we will use a busybox image. The final manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: demo
spec:
  selector:
    matchLabels:
      app: demo
  replicas: 1
  template:
    metadata:
      annotations:
      labels:
        app: demo
    spec:
      serviceAccountName: db
      containers:
        - name: app
          image: busybox:1.29
          command:
            - "/bin/sleep"
            - "10000"
          volumeMounts:
            - name: vault-db-creds
              mountPath: '/mnt/secrets-store'
              readOnly: true
      volumes:
        - name: vault-db-creds
          csi:
            driver: 'secrets-store.csi.k8s.io'
            readOnly: true
            volumeAttributes:
              secretProviderClass: db-secret-class
Operation Verification
When you create the pod prepared above, authentication with the Vault server is performed at the time the container inside the pod starts, and the process from retrieving the secret to mounting it inside the pod is executed.
Starting a shell inside the pod and checking the path specified in volumeMounts.mountPath reveals that the keys within the secret specified in the SecretProviderClass are mounted as symbolic links. The file names are mapped to the names specified in objectName, and the values corresponding to each key are written into the contents of the files.
# Start a shell inside the Pod
$ kubectl exec -it app-75fbd57ff8-pz482 -- sh
/ # ls -l /mnt/secrets-store/
total 0
lrwxrwxrwx 1 root root 15 Mar 20 08:26 obj-password -> ..data/obj-password
lrwxrwxrwx 1 root root 15 Mar 20 08:26 obj-username -> ..data/obj-username
/ # cat /mnt/secrets-store/obj-username
myusername
/ # cat /mnt/secrets-store/obj-password
mypassword
In this way, using the Vault CSI provider allows you to reference secrets from the Vault side within a pod.
By the way, let's see if the values inside the pod change when the secret values are modified on the Vault side.
Update the values on the Vault side to myusername2 and mypassword2, respectively.
$ vault kv put -mount=secret database/secret username=myusername2 password=mypassword2
The values inside the pod remain unchanged.
/ # cat /mnt/secrets-store/obj-username
myusername
/ # cat /mnt/secrets-store/obj-password
mypassword
As mentioned earlier, since secrets are retrieved from the Vault server at the time of pod creation, changes to the secrets on the Vault side are not reflected in real-time, and a pod recreation is required.
After deleting the existing pod, checking the values again confirms that they have been updated to the new values.
# Delete the Pod.
$ kubectl delete pod app-75fbd57ff8-pz482
pod "app-75fbd57ff8-pz482" deleted
# A new pod starts due to the deployment, so start a shell
$ kubectl exec -it app-75fbd57ff8-z9927 -- sh
/ #
# Verify
/ # cat /mnt/secrets-store/obj-username
myusername2
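As an aside, the secrets-store-csi-driver also has an optional secret-rotation feature (alpha at the time of writing) that periodically re-fetches secrets without recreating pods. Enabling it is a Helm values change along these lines (flag names per the driver's chart; verify against your chart version):

```
helm upgrade csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --set enableSecretRotation=true \
  --set rotationPollInterval=2m
```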
Secret Usage Status
When a secret created by a SecretProviderClass is in use by any pod, a secretproviderclasspodstatuses resource is created. This resource is dynamically created only when a SecretProviderClass resource is being referenced by a pod, and it is automatically deleted when no pods are referencing it.
For example, in the above case, since the pod app-75fbd57ff8-vxclr is referencing the db-secret-class SecretProviderClass, the following resource is created.
(The resource name seems to follow the format [pod-name]-[namespace]-[SecretProviderClass-name]).
$ kubectl get secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io
NAME AGE
app-75fbd57ff8-vxclr-default-db-secret-class 89s
Looking at the details with kubectl describe allows you to see with what key names the secrets are actually mounted to the pod.
$ kubectl describe secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io
...
Status:
  Mounted:  true
  Objects:
    Id:       obj-password
    Version:  CyNGjiAyP1qYpoZcgGU-suc3FMYduH64LU-Pwy2X9MQ=
    Id:       obj-username
    Version:  uoQc8U20pD_ebH3RTbcZgmyWuDbSTQ_9MRXU3daavOM=
  Pod Name:                    app-75fbd57ff8-vxclr
  Secret Provider Class Name:  db-secret-class
  Target Path:                 /var/lib/kubelet/pods/b3193227-239d-437f-b73d-a40fc13d6a35/volumes/kubernetes.io~csi/vault-db-creds/mount
Events: <none>
When there are no more pods referencing the SecretProviderClass, the secretproviderclasspodstatuses is also automatically deleted.
$ kubectl delete deployments.apps app
deployment.apps "app" deleted
$ kubectl get secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io
No resources found in default namespace.
Incidentally, the path shown in the Target Path above represents the path on the node where the secret is actually mounted.
Checking this path on the node where the pod is running confirms that the same secrets as those verified inside the pod are mounted.
$ pwd
/var/lib/kubelet/pods/b3193227-239d-437f-b73d-a40fc13d6a35/volumes/kubernetes.io~csi/vault-db-creds/mount
$ ls -l
total 0
lrwxrwxrwx 1 root root 19 Mar 20 08:26 obj-password -> ..data/obj-password
lrwxrwxrwx 1 root root 19 Mar 20 08:26 obj-username -> ..data/obj-username
$ cat obj-username
myusername2
Error Verification
If secret retrieval from the Vault server fails due to incorrect manifest settings or other reasons, the pod status will remain as ContainerCreating during creation.
$ kubectl get pod
NAME                   READY   STATUS              RESTARTS   AGE
app-75fbd57ff8-hm787   0/1     ContainerCreating   0          13s
You can check the error details using describe pod.
$ kubectl describe pod app-75fbd57ff8-hm787
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 99s default-scheduler Successfully assigned vault/app-75fbd57ff8-hm787 to k8s-w1
Warning FailedMount 36s (x8 over 100s) kubelet MountVolume.SetUp failed for volume "vault-db-creds" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod vault/app-75fbd57ff8-hm787, err: rpc error: code = Unknown desc = error making mount request: couldn't read secret "username": failed to login: Error making API request.
URL: POST http://vault.vault:8200/v1/auth/kubernetes/login
Code: 400. Errors:
* invalid role name "db"
In the example above, it states * invalid role name "db". This indicates that while the role name is specified as db within the SecretProviderClass object, the role name defined on the Vault server side is database, meaning login to the Vault server failed.
Similar error messages can also be found in the logs of the vault-csi-provider container within the vault-csi-provider pod.
$ kubectl logs vault-csi-provider-k92cr vault-csi-provider
2024-03-20T08:22:18.838Z [INFO] server: Finished unary gRPC call: grpc.method=/v1alpha1.CSIDriverProvider/Mount grpc.time=9.530387ms grpc.code=Unknown
err=
| error making mount request: couldn't read secret "username": failed to login: Error making API request.
|
| URL: POST http://vault.vault:8200/v1/auth/kubernetes/login
| Code: 400. Errors:
|
| * invalid role name "db"
Therefore, if things are not working correctly, checking these messages is recommended.
Vault Agent Injector
Vault Agent Injector is a mechanism in which annotations added to a pod are detected by the injector at pod creation time, and a sidecar container that fetches the secret is injected into the pod (the common sidecar injection pattern).
Functionally, it achieves the same result as the CSI provider; the official documentation provides a comparison of the two approaches.
Operation Verification
The requirements and configuration items of the Agent Injector, as well as a detailed tutorial on its usage, are covered in the official documentation; we will follow those steps here.
Create a policy, k8s authentication role, and SA, similar to the CSI Provider.
$ vault policy write db-policy - <<EOF
path "secret/data/database/*" {
  capabilities = ["read"]
}
EOF
$ vault write auth/kubernetes/role/database \
    bound_service_account_names=db \
    bound_service_account_namespaces=default \
    policies=db-policy \
    ttl=20m
$ kubectl create sa db
To use the Agent Injector feature, add Vault Agent Injector annotations to the pod. Supported items are summarized in annotations, but at a minimum, specify the following:
| annotation | Value | Description |
|---|---|---|
| vault.hashicorp.com/agent-inject | true | If set to true, Agent injection is enabled, and an agent container is injected at pod creation time. |
| vault.hashicorp.com/role | database | Specify the k8s authentication role on the Vault side created above. |
| vault.hashicorp.com/agent-inject-secret-[arbitrary file name] | secret/database/secret | The path on the Vault side from which to retrieve the secret. |
In the following configuration, the secret stored in secret/database/secret on the Vault side will be stored in a file named db-config.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: demo
spec:
  selector:
    matchLabels:
      app: demo
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: database
        vault.hashicorp.com/agent-inject-secret-db-config: secret/database/secret
      labels:
        app: demo
    spec:
      serviceAccountName: db
      containers:
        - name: app
          image: busybox:1.29
          command:
            - "/bin/sleep"
            - "10000"
When the Deployment is created, the injector detects it, and an init container vault-agent-init and the following vault-agent container are injected into the pod:
vault-agent:
  Container ID:  containerd://8123c4c7dadea724c7ec93d0ad3d04810e1e0c90077019ade6cd055495f61bfa
  Image:         hashicorp/vault:1.19.0
  Image ID:      docker.io/hashicorp/vault@sha256:bbb7f98dc67d9ebdda1256de288df1cb9a5450990e48338043690bee3b332c90
  Port:          <none>
  Host Port:     <none>
  Command:
    /bin/sh
    -ec
  Args:
    echo ${VAULT_CONFIG?} | base64 -d > /home/vault/config.json && vault agent -config=/home/vault/config.json
  State:          Running
    Started:      Thu, 01 May 2025 06:45:00 +0000
  Ready:          True
  Restart Count:  0
  Limits:
    cpu:     500m
    memory:  128Mi
  Requests:
    cpu:     250m
    memory:  64Mi
  Environment:
    NAMESPACE:         default (v1:metadata.namespace)
    HOST_IP:            (v1:status.hostIP)
    POD_IP:             (v1:status.podIP)
    VAULT_LOG_LEVEL:   info
    VAULT_LOG_FORMAT:  standard
    VAULT_CONFIG:      eyJhdXRvX2F1dGgiOnsibWV0aG9kIjp7InR5cGUiOiJrdWJlcm5ldGVzIiwibW91bnRfcGF0aCI6ImF1dGgva3ViZXJuZXRlcyIsImNvbmZpZyI6eyJyb2xlIjoiZGF0YWJhc2UiLCJ0b2tlbl9wYXRoIjoiL3Zhci9ydW4vc2VjcmV0cy9rdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3Rva2VuIn19LCJzaW5rIjpbeyJ0eXBlIjoiZmlsZSIsImNvbmZpZyI6eyJwYXRoIjoiL2hvbWUvdmF1bHQvLnZhdWx0LXRva2VuIn19XX0sImV4aXRfYWZ0ZXJfYXV0aCI6ZmFsc2UsInBpZF9maWxlIjoiL2hvbWUvdmF1bHQvLnBpZCIsInZhdWx0Ijp7ImFkZHJlc3MiOiJodHRwOi8vdmF1bHQudmF1bHQuc3ZjOjgyMDAifSwidGVtcGxhdGUiOlt7ImRlc3RpbmF0aW9uIjoiL3ZhdWx0L3NlY3JldHMvZGItY29uZmlnIiwiY29udGVudHMiOiJ7eyB3aXRoIHNlY3JldCBcInNlY3JldC9kYXRhYmFzZS9zZWNyZXRcIiB9fXt7IHJhbmdlICRrLCAkdiA6PSAuRGF0YSB9fXt7ICRrIH19OiB7eyAkdiB9fVxue3sgZW5kIH19e3sgZW5kIH19IiwibGVmdF9kZWxpbWl0ZXIiOiJ7eyIsInJpZ2h0X2RlbGltaXRlciI6In19In1dLCJ0ZW1wbGF0ZV9jb25maWciOnsiZXhpdF9vbl9yZXRyeV9mYWlsdXJlIjp0cnVlfX0=
  Mounts:
    /home/vault from home-sidecar (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-64hbc (ro)
    /vault/secrets from vault-secrets (rw)
Inside the pod's app container, the /vault/secrets directory is automatically created, and the file db-config specified in the annotation is generated there. The secret's values are written to this file in a raw, unformatted form.
/ # cat /vault/secrets/db-config
data: map[password:mypassword username:myusername] # Stored here in key-value format
metadata: map[created_time:2025-05-01T06:11:29.838293545Z custom_metadata:<nil> deletion_time: destroyed:false version:1]
The /vault/secrets directory is dynamically created as an emptyDir volume and shared between the app container and the vault-agent container.
spec:
  containers:
    - command:
        - /bin/sleep
        - "10000"
      name: app
      volumeMounts:
        - mountPath: /vault/secrets
          name: vault-secrets
  volumes:
    - emptyDir:
        medium: Memory
      name: vault-secrets
Note that, just like the CSI driver, changes to the secret on the Vault side are not reflected in the pod's values, and pod recreation is required to apply updates.
Formatting Secret Output
As seen above, the secret is written to the file unformatted. If you want to format it, you can use secret templates.
A template is specified with the annotation vault.hashicorp.com/agent-inject-template-[filename], and the content written to the file is defined in template format. For example, the following outputs the username and password contained in secret/database/secret to the file config as shell export statements.
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/database/secret" -}}
          export username="{{ .Data.data.username }}"
          export password="{{ .Data.data.password }}"
          {{- end }}
When you create a pod with this setup, a config file is created under /vault/secrets, allowing you to retrieve the username and password from the secret in the specified format.
/ # cat /vault/secrets/config
export username="myusername2"
export password="mypassword2"
Applying to Environment Variables
There does not appear to be a way to map values retrieved from a secret directly to container environment variables, but as described in the documentation, you can work around this by outputting the values to a file with a template and then sourcing that file in the entrypoint or args.
vault.hashicorp.com/agent-inject-template-config: |
  {{- with secret "secret/data/web" -}}
  export api_key="{{ .Data.data.payments_api_key }}"
  {{- end }}

spec:
  serviceAccountName: web
  containers:
    - name: web
      image: alpine:latest
      command:
        ['sh', '-c']
      args:
        ['source /vault/secrets/config && <entrypoint script>'] # the exported api_key is sourced here
      ports:
        - containerPort: 9090
Vault Secrets Operator
The Vault Secrets Operator leverages the common Kubernetes operator pattern to synchronize secrets stored in a Vault server into Kubernetes Secrets via custom resources; detailed information is available in the official documentation.
We will try it out by following the usage procedures described in the documentation above.
Setup
The Vault Secrets Operator can also be installed via Helm, but it uses a different chart, hashicorp/vault-secrets-operator, from the Vault server.
helm install --version 0.5.2 --create-namespace --namespace vault-secrets-operator vault-secrets-operator hashicorp/vault-secrets-operator
Once the installation is complete, the operator pod will start.
$ kubectl get pod -n vault-secrets-operator
NAME READY STATUS RESTARTS AGE
vault-secrets-operator-controller-manager-7d48875c77-fmftl 2/2 Running 0 30h
In the Vault Secrets Operator, the authentication to the Vault server and the secrets to be retrieved are defined using custom resources. First, create a VaultConnection custom resource that describes the destination Vault server. Since the connection destination here is the Vault service within the same cluster, you can specify the Vault service in spec.address.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
  name: vault-connection
spec:
  address: http://vault.vault.svc.cluster.local:8200
Next, create a VaultAuth custom resource for authenticating with the Vault server. As listed in Supported Vault authentication methods, the following methods can be used:
- kubernetes
- JWT
- AppRole
While the documentation describes the configuration for Kubernetes authentication, we will take this opportunity to set it up to use AppRole for authentication.
While the documentation describes the configuration for Kubernetes authentication, we will take this opportunity to use AppRole instead. Create an AppRole named k8s-vault-op on the Vault server beforehand. The secret will be written as KV v2 under secret/data/k8s/testdata/secret, so configure a policy that allows access to it.
$ cat policy.yml
path "secret/data/k8s/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
# Enable the AppRole auth method (if not already enabled)
$ vault auth enable approle
# Create policy for AppRole
$ vault policy write k8s-vault-op policy.yml
# Create AppRole
$ vault write auth/approle/role/k8s-vault-op policies="k8s-vault-op"
# Retrieve Role ID
$ vault read auth/approle/role/k8s-vault-op/role-id
# Retrieve Secret ID
$ vault write -f auth/approle/role/k8s-vault-op/secret-id
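Before wiring these credentials into the operator, they can be sanity-checked by logging in directly with the AppRole auth method (role_id and secret_id are the values retrieved above):

```
$ vault write auth/approle/login \
    role_id="e7a69056-729a-5448-ca6b-5c2b39035ee9" \
    secret_id="3a145c1e-54ed-b841-2d59-f2eb1839e539"
```

A successful login returns a client token with the k8s-vault-op policy attached.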
Next, create a VaultAuth custom resource to authenticate using the AppRole above. Referring to VaultAuthSpec in the API Reference, for AppRole, you specify VaultAuthConfigAppRole in the appRole field, and within this, you specify the roleId value and a secretRef that stores the secretId. Therefore, create a Kubernetes secret k8s-vault-op containing the Secret ID obtained above.
apiVersion: v1
kind: Secret
metadata:
  name: k8s-vault-op
type: Opaque
stringData:
  id: 3a145c1e-54ed-b841-2d59-f2eb1839e539 # secret ID
$ kubectl apply -f secret-id.yml
In the VaultAuth manifest, specify the roleId under appRole and set the secretRef to the k8s-vault-op mentioned above.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: vault-auth
spec:
  vaultConnectionRef: vault-connection
  method: appRole
  mount: approle
  appRole:
    roleId: e7a69056-729a-5448-ca6b-5c2b39035ee9
    secretRef: k8s-vault-op
After creating the resource, you can confirm whether the authentication was successful by describing vaultauths.
$ kubectl describe vaultauths.secrets.hashicorp.com vault-auth
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Accepted 3s VaultAuth Successfully handled VaultAuth resource request
Now that the preparations are complete, we will finally write the KV v2 value to be used for the following verification on the Vault server side.
$ vault kv put -mount=secret k8s/testdata/secret key1=value1
========= Secret Path =========
secret/data/k8s/testdata/secret
Creating a StaticSecret
To synchronize a secret from the Vault side to the Kubernetes side, create a VaultStaticSecret custom resource. The contents specified in spec are as follows:
- vaultAuthRef: Specify the name of the VaultAuth custom resource used for authentication to the Vault server.
- mount: Specify the mount on the Vault side. Since we specified -mount=secret when creating the secret, specify secret here as well.
- type: Specify the secret engine type on the Vault side. Since we wrote the secret using KV v2, specify kv-v2.
- path: Specify the path to the target secret on the Vault side.
- refreshAfter: The interval at which the secret is re-synchronized from Vault (the documentation's definition is somewhat vague).
- destination: Information about the Kubernetes-side Secret where the values retrieved from Vault are stored.
  - create: If set to true, a new Secret is created if it does not exist.
  - name: The Kubernetes Secret name.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: vault-static-secret-v2
spec:
  vaultAuthRef: vault-auth
  mount: secret
  type: kv-v2
  path: k8s/testdata/secret
  refreshAfter: 60s
  destination:
    create: true
    name: kv-secret
After creating the resource, if the secret is successfully retrieved from the Vault side, a Secret synced message will be recorded in the Events of the vaultstaticsecrets resource.
$ kubectl describe vaultstaticsecrets.secrets.hashicorp.com vault-static-secret-v2
Type    Reason         Age   From               Message
----    ------         ---   ----               -------
Normal  SecretSynced   10s   VaultStaticSecret  Secret synced
Normal  SecretRotated  10s   VaultStaticSecret  Secret synced
This will create a secret with the name specified in the manifest.
$ kubectl get secret kv-secret
NAME        TYPE     DATA   AGE
kv-secret   Opaque   2      35s
We wrote the key-value key1: value1 on the Vault side, but within the secret, the keys are stored under data, and the values are written as base64 encoded strings.
$ kubectl get secret kv-secret -o yaml
apiVersion: v1
data:
  _raw: eyJkYXRhIjp7ImtleTEiOiJ2YWx1ZTEifSwibWV0YWRhdGEiOnsiY3JlYXRlZF90aW1lIjoiMjAyNC0wMy0yMVQwOTo1NTo0NC4zMjgxOTIyMTFaIiwiY3VzdG9tX21ldGFkYXRhIjpudWxsLCJkZWxldGlvbl90aW1lIjoiIiwiZGVzdHJveWVkIjpmYWxzZSwidmVyc2lvbiI6MX19
  key1: dmFsdWUx
kind: Secret
...
$ kubectl get secret kv-secret -o yaml | yq -r ".data.key1" | base64 -d
value1
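For reference, the _raw entry holds the entire Vault API response (the data plus its metadata), also base64 encoded. Decoding the value shown above illustrates this:

```shell
# Decode the _raw entry of the synced secret; the string below is the
# value taken from the kubectl output above.
RAW="eyJkYXRhIjp7ImtleTEiOiJ2YWx1ZTEifSwibWV0YWRhdGEiOnsiY3JlYXRlZF90aW1lIjoiMjAyNC0wMy0yMVQwOTo1NTo0NC4zMjgxOTIyMTFaIiwiY3VzdG9tX21ldGFkYXRhIjpudWxsLCJkZWxldGlvbl90aW1lIjoiIiwiZGVzdHJveWVkIjpmYWxzZSwidmVyc2lvbiI6MX19"
echo "$RAW" | base64 -d
# prints {"data":{"key1":"value1"},"metadata":{...}}
```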
The secret created as described above can be handled in the same way as a regular secret, and can be referenced by mounting it on the pod side, etc.
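For example, a pod could reference the synced secret as an environment variable via the standard secretKeyRef mechanism (the pod name and image below are illustrative assumptions, not from the original setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kv-secret-consumer      # hypothetical name
spec:
  containers:
    - name: app
      image: busybox            # illustrative image
      command: ["sh", "-c", "echo $KEY1 && sleep 3600"]
      env:
        - name: KEY1
          valueFrom:
            secretKeyRef:
              name: kv-secret   # the secret created by VaultStaticSecret
              key: key1
```

Note that environment variables are captured at container start, so a container restart is needed to pick up a rotated value; mounting the secret as a volume lets the kubelet refresh the projected files instead.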
Also, let's try changing the value of the secret to value2 on the k8s side.
$ echo -n "value2" | base64
dmFsdWUy
# Write the base64 encoded value to the secret
$ kubectl edit secret kv-secret
secret/kv-secret edited
# Retrieve the value
$ kubectl get secret kv-secret -o yaml | yq -r ".data.key1" | base64 -d
value2
Immediately after the change, the value is value2, but if you wait a bit and retrieve the value of key1 again, it returns to value1.
$ kubectl get secret kv-secret -o yaml | yq -r ".data.key1" | base64 -d
value1
Since the secret on the Kubernetes side is periodically synchronized with the Vault server, even if the value on the Kubernetes secret side is changed, it is restored to the value on the Vault server side at the time of synchronization. When synchronization is complete, SecretRotated is output in the Events of VaultStaticSecret.
Events
Normal SecretRotated 15s (x4 over 7m51s) VaultStaticSecret Secret synced
However, this is a one-way periodic synchronization from Vault to k8s: changing the value on the k8s side does not change the value on the Vault side. Rather than synchronization, then, it may be more accurate to say that the secret is periodically rotated, as described above. Indeed, even immediately after changing the value of the k8s secret, you can confirm that the value on the Vault server remains unchanged.
# Write the base64 encoded value to the secret
$ kubectl edit secret kv-secret
secret/kv-secret edited
# Checking the secret on the Vault server side immediately after shows it's still value1.
$ vault kv get -mount=secret k8s/testdata/secret
========= Secret Path =========
secret/data/k8s/testdata/secret
==== Data ====
Key Value
--- -----
key1 value1
Secrets created with the StaticSecret custom resource are periodically rotated, preventing accidents where an unintended operation on the Kubernetes side overwrites a secret value and causes unexpected behavior.
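The direction of this synchronization can also be sketched from the other side, with a hypothetical session against the running Vault server (value3 is an illustrative value; the commands mirror those used earlier):

```shell
# Update the value on the Vault side
$ vault kv put -mount=secret k8s/testdata/secret key1=value3

# After refreshAfter (60s here) elapses, the Kubernetes secret follows
$ kubectl get secret kv-secret -o yaml | yq -r ".data.key1" | base64 -d
value3
```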
Creating a DynamicSecret
In addition to StaticSecret, you can also create custom resources corresponding to DynamicSecret. While StaticSecret is used to reference values previously written to the Vault server, as we saw above, DynamicSecret is used to reference credentials that are dynamically created based on a request and have an expiration date. The documentation provides an example using AWS Secrets, so let's try that.
In this example, an AWS IAM user is dynamically created when the custom resource is created. Additionally, because the created IAM user is rotated based on the expiration date specified on the Vault side, it has the advantage of being difficult to exploit even if the IAM user's credentials are leaked, as they will expire after a certain amount of time.
When using AWS Secrets, you must first complete the setup for the AWS secrets engine. Also, at this time, you configure the details and policies for the IAM users created by DynamicSecret.
# Enable the AWS secrets engine
$ vault secrets enable aws
# Configure IAM credentials for accessing AWS
# access_key and secret_key should be created in advance
$ vault write aws/config/root \
access_key=... \
secret_key=... \
region=ap-northeast-1
# Configure the IAM user to be created
$ vault write aws/roles/my-role \
credential_type=iam_user \
policy_document=-<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
}
]
}
EOF
After setting the above, in the standard method using Vault commands, you can dynamically create an IAM user on the AWS side by executing vault read aws/creds/my-role. With the Secrets Operator, instead of executing a command, you create a VaultDynamicSecret custom resource, which creates the IAM user at the timing of the resource creation.
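For reference, a sketch of what that standard CLI flow returns (all values here are placeholders, not real output from this environment):

```shell
$ vault read aws/creds/my-role
Key                Value
---                -----
lease_id           aws/creds/my-role/...
lease_duration     768h
lease_renewable    true
access_key         AKIA...
secret_key         ...
security_token     <nil>
```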
First, add a policy so that the AppRole we used earlier can access aws/creds/my-role.
$ cat role-policy.hcl
path "aws/creds/*" {
capabilities = ["read"]
}
$ vault policy write iam-role role-policy.hcl
$ vault write auth/approle/role/k8s-vault-op policies=k8s-vault-op,iam-role
Next, create a manifest for VaultDynamicSecret following the example in the documentation.
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
name: vault-dynamic-secret-aws-iam
spec:
vaultAuthRef: vault-auth
mount: aws
path: creds/my-role
destination:
create: true
name: dynamic-aws-iam
Creating this resource creates an IAM user on the AWS side. At the same time, a Kubernetes secret dynamic-aws-iam is created, which contains the credentials (access_key and secret_key) for the created IAM user.
$ kubectl get secret dynamic-aws-iam -o yaml | yq -r ".data._raw" | base64 -d | jq
{
"access_key": "xxx",
"secret_key": "yyy",
"security_token": null
}
Checking the AWS side, you can confirm that an IAM user named vault-approle-my-role-1711468574-iTMb25HW7bU0mfk1v4yV has been created.
$ aws iam list-users
Users:
- Arn: ....
CreateDate: ...
Path: /
UserId: AIDARTHR5BBQBSFCH6X6R
UserName: vault-approle-my-role-1711468574-iTMb25HW7bU0mfk1v4yV
Since DynamicSecret involves temporary credentials, it is common to set an expiration date using a TTL. To verify the behavior when the TTL expires, we will set the expiration date (TTL) for the above IAM user to 1 minute using the following command.
$ vault write sys/mounts/aws/tune default_lease_ttl=1m max_lease_ttl=1m
Looking at the behavior after recreating the VaultDynamicSecret resource, it seems that an operation of deleting the IAM user and creating a new one is being executed every minute. This can be confirmed by running aws iam list-users every minute and checking the CreateDate of the user. This status can also be confirmed from the events when describing the VaultDynamicSecret.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SecretSynced 2m30s VaultDynamicSecret Secret synced, lease_id="aws/creds/my-role/QQq6eSDpqOmUl2mfbV3FniBS", horizon=44.279659737s
Normal SecretRotated 2m28s VaultDynamicSecret Secret synced, lease_id="aws/creds/my-role/csc72JohsmDQVtbzIKYJsN95", horizon=43.631735595s
Normal SecretLeaseRenewal 106s VaultDynamicSecret Lease renewal duration was truncated from 60s to 18s, requesting new credentials
Normal SecretRotated 104s VaultDynamicSecret Secret synced, lease_id="aws/creds/my-role/yVwNVa8PQ105wpQrurYyFWbu", horizon=41.194170039s
Normal SecretLeaseRenewal 63s VaultDynamicSecret Lease renewal duration was truncated from 60s to 19s, requesting new credentials
Normal SecretRotated 60s VaultDynamicSecret Secret synced, lease_id="aws/creds/my-role/AvZyDu9DecRCG5RodD8P22Pt", horizon=43.546827077s
Normal SecretLeaseRenewal 17s VaultDynamicSecret Lease renewal duration was truncated from 60s to 17s, requesting new credentials
Normal SecretRotated 14s VaultDynamicSecret Secret synced, lease_id="aws/creds/my-role/bDKo5OTSlzlwr7j0SVVfx6s1", horizon=41.868678212s
One minute after a new IAM user is created, it expires and is deleted, but as long as the VaultDynamicSecret resource exists, a new user is created on each SecretRotated event. In other words, even when the TTL expires, the VaultDynamicSecret side handles rotation appropriately. Since the credentials are rotated every TTL, pods and other consumers can generally use them without being aware of the TTL, although some awareness may be needed in cases where it matters that the IAM username and access_key values themselves change on each rotation.
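As a hypothetical example of such a consumer, a pod could pull the rotated credentials from the synced secret via secretKeyRef (the pod name and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-client              # hypothetical name
spec:
  containers:
    - name: app
      image: amazon/aws-cli     # illustrative image
      command: ["aws", "ec2", "describe-instances"]
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: dynamic-aws-iam   # the secret created by VaultDynamicSecret
              key: access_key
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: dynamic-aws-iam
              key: secret_key
```

Since environment variables are fixed at container start, a long-running pod would need a restart (or a volume mount instead of env vars) to pick up rotated credentials.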
When the VaultDynamicSecret resource is deleted, rotation no longer occurs, and the IAM user created within it will be deleted by Vault once the TTL expires, eventually returning to the state before the VaultDynamicSecret resource was created.
Auto unseal
As seen in Initialization and Unsealing, Vault server pods started in modes other than Dev start in a sealed state, requiring an unseal operation every time a pod is recreated. However, manually executing unseal every time a pod is recreated is quite inconvenient for operations, so an Auto unseal feature is provided. This is a function that automatically executes unseal using a key stored in an external cloud or similar location.
The following options for key storage locations are listed in Auto unseal:
- AWS KMS
- GCP Cloud KMS
- Azure Key Vault
- HSM (Hardware Security Module)
- Transit Secret Engine
Among these, the first three are methods using cloud provider Key Management services, while HSM is a feature exclusive to Vault Enterprise. The last one, the Transit Secret Engine, is a feature that uses Vault's own engine, providing a mechanism to store keys on a separate Vault server built independently from the target Vault server. Since this environment can be prepared locally, we will try it out.
Setup
Basically, we will implement this by following the steps in the documentation for Auto-unseal using Transit secrets engine.
The Transit Secret Engine method requires a separate Vault server to store keys, distinct from the Vault server built on the k8s cluster. While you could prepare another k8s cluster and build Vault with Helm, we will focus on simplicity here and build it using Docker on a different server.
The docker-compose.yml for the build is as follows.
services:
vault:
container_name: vault
image: hashicorp/vault
ports:
- 8200:8200
cap_add:
- IPC_LOCK
command: server -dev -dev-root-token-id="00000000-0000-0000-0000-000000000000"
environment:
VAULT_DEV_ROOT_TOKEN_ID: '00000000-0000-0000-0000-000000000000'
VAULT_TOKEN: '00000000-0000-0000-0000-000000000000'
Additionally, the documentation refers to the Vault server that stores the key as vault1 and the Vault server enabling Auto unseal as vault2; in this article they correspond as follows:
- vault1: The Vault server storing the key for vault2. Built with Docker.
- vault2: The Vault server on the k8s cluster. We will use the one we have been using so far.

Quoted from https://developer.hashicorp.com/vault/tutorials/auto-unseal/autounseal-transit#scenario-introduction
After starting the Vault server with Docker, set the environment variables to connect to the container's Vault server, enable the transit engine, and create a key named autounseal. These steps are the same as in the documentation.
# Execute on the server where vault1 is running
$ export VAULT_ADDR="http://0.0.0.0:8200"
$ export VAULT_TOKEN="00000000-0000-0000-0000-000000000000"
# Enable Audit log
# The documentation sets it to output to audit.log,
# but for Docker, we set it to output to stdout.
$ vault audit enable file file_path=stdout
$ vault secrets enable transit
$ vault write -f transit/keys/autounseal
$ vault policy write autounseal -<<EOF
path "transit/encrypt/autounseal" {
capabilities = [ "update" ]
}
path "transit/decrypt/autounseal" {
capabilities = [ "update" ]
}
EOF
# Create Token
$ vault token create -orphan -policy="autounseal" \
-wrap-ttl=120 -period=24h \
-field=wrapping_token > wrapping-token.txt
# Unwrap
$ vault unwrap -field=token $(cat wrapping-token.txt)
Operation Verification
To enable auto-unseal, you must add a seal block to the Vault server configuration file. In the Vault server on the k8s cluster, the configuration is described in HCL syntax in the vault-config ConfigMap, so we will edit this.
- address: Specify the address of the Vault server running in Docker
- token: Set the value of the token unwrapped in the previous step
- disable_renewal: Set to false
- key_name: Specify the key name set in the Docker Vault
- mount_path: Specify the mount path set in the Docker Vault
- tls_skip_verify: Set to true as it is HTTP communication
seal "transit" {
address = "http://192.168.3.181:8200"
token = "hvs.CAESIFCKWyUjvdWquPuRdiLSkbxgiog0Ke6zODf5ApbdP8awGh4KHGh2cy5WWUlZQ0c5YWdsNGhxOGc1N1dHSERCeG0"
disable_renewal = "false"
key_name = "autounseal"
mount_path = "transit/"
tls_skip_verify = "true"
}
Edit the configmap to set the above values.
$ kubectl edit configmaps vault-config
After editing, delete the Vault pod to reflect the changes in the ConfigMap.
When the pod is recreated, it starts in a sealed state; auto-unseal is not yet in effect at this point. This is because, as seen in Initialization and Unsealing, we previously unsealed using unseal keys, so the current seal type is Shamir. You therefore need to migrate from the Shamir seal to auto-unseal following the Seal migration procedure. The exact steps vary with the Vault version and HA configuration, but for a StandAlone Vault like this one, proceed as follows:
Now, bring the standby node back up and run the unseal command on each key, by supplying the -migrate flag.
Therefore, execute kubectl exec -ti vault-0 -- vault operator unseal -migrate against the Vault server on the k8s cluster, entering a different unseal key each of the three times, just as in Initialization and Unsealing.
If the input is successful, no specific message will be displayed, but auto-unseal will become active. To verify this, delete the Vault pod again and wait a moment; the pod will enter the Ready state without you having to perform an unseal operation.
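One way to confirm the migration (a sketch; output abbreviated, and the exact fields may vary by version) is to check vault status, whose Seal Type should now read transit:

```shell
$ kubectl exec -ti vault-0 -- vault status
Key                      Value
---                      -----
Seal Type                transit
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
...
```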
Checking the logs of the Docker-side Vault server with docker logs vault reveals that the audit log has recorded requests like the following. As described in the documentation, operation: update requests against the transit engine's encrypt/decrypt endpoints for the autounseal key are being executed, confirming that auto-unseal is performed through these requests.
"request": {
"id": "e353d0e1-57e3-b714-db1a-0bd4b12fad18",
"client_id": "CazzHB1T6UPO5F7ytGq17GnkZg/t3aiSR1ZYkz28Cxk=",
"operation": "update",
"mount_point": "transit/",
"mount_type": "transit",
"mount_accessor": "transit_292d86f0",
"mount_running_version": "v1.15.6+builtin.vault",
"mount_class": "secret",
...
"path": "transit/encrypt/autounseal",
...
"remote_address": "192.168.3.125",
As seen above, we have confirmed the operation of storing a key for auto-unseal on another Vault server and executing it using the Transit Secret Engine.
Others
About OpenBao
In 2023, following HashiCorp's change of Terraform's license to the Business Source License, OpenTofu, an open-source project forked from Terraform, was launched.
Similarly for Vault, a project called OpenBao, forked from Vault, is being developed.
It possesses most of the core Vault features such as server, authentication, policy, secret engine, agent, and proxy, and several features for running on k8s, such as the agent injector, have also been implemented. The support status for the features verified in this article is described in the following article.
Conclusion
In this article, we deployed HashiCorp Vault on a k8s cluster and tried out various features. Vault itself is a secret management service for handling sensitive information, and it offers a fairly rich set of features for seamless integration with Kubernetes secrets. How to handle sensitive information (secrets) in real operations is a common challenge; one of several possible approaches is to centralize management on the Vault side and use the Vault Secrets Operator introduced here to synchronize with Kubernetes secrets.