OpenBao's vault-k8s Compatibility Status
In a previous article about running HashiCorp Vault on k8s, I touched on OpenBao briefly, but as of February 2024 the project page and other resources were not yet in place. Revisiting the project recently, I found that the documentation has grown substantially and now covers running OpenBao on Kubernetes. In this article, I examine whether it offers features compatible with HashiCorp Vault on k8s.
About OpenBao
OpenBao, similar to HashiCorp Vault, is a Secret Management solution for centrally managing various sensitive information, such as credentials used for services or arbitrary key-values. Just as OpenTofu was forked following the Terraform license change, this project was forked from Vault v1.14.8–v1.14.9 and is currently being developed under the Linux Foundation.
Compatibility with HashiCorp Vault k8s
OpenBao includes almost all core Vault features such as server, authentication, policy, secret engine, agent, and proxy. Deploying to a k8s cluster using a Helm chart is also officially supported.
Given this, I am interested in whether the features I explored in my previous article, such as the Vault Secrets Operator, are also provided in OpenBao, and whether they are compatible enough for OpenBao to replace Vault on k8s. I will therefore check the compatibility of the following features from the previous article:
- helm install
- vault CLI
- CSI provider
- Agent injector
- Secret Operator
helm install + unseal
OpenBao provides a helm chart, making it easy to deploy onto a k8s cluster. Similar to Vault, you can specify one of the following four installation modes:
- dev
- standalone (default)
- HA
- External
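A non-default mode is selected through Helm values. As a sketch, assuming the openbao-helm chart keeps the same `server.*` value keys as the upstream vault-helm chart it was forked from, an HA install with integrated Raft storage might look like:

```yaml
# values-ha.yaml -- assumption: value keys mirror the upstream vault-helm chart
server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
```

which would then be installed with `helm install openbao openbao/openbao -n openbao -f values-ha.yaml`.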
To install in standalone mode:
$ helm repo add openbao https://openbao.github.io/openbao-helm
$ helm install openbao openbao/openbao -n openbao --create-namespace
In standalone mode, an OpenBao pod, which corresponds to the Vault pod, will start.
$ k get pod -n openbao
NAME        READY   STATUS    RESTARTS   AGE
openbao-0   1/1     Running   0          7h26m
Since the OpenBao pod starts in a sealed state, you must perform initialization and unsealing, just as with Vault. You can use the bao operator init command, which simply replaces vault with bao.
$ kubectl exec -ti openbao-0 -- bao operator init
As with Vault, the unseal keys and root token are displayed upon execution.
Unseal Key 1: Ay4AU20/cJEn/qNT0Ck0ElJP1lb6zwsDIeAxDgZVJUxg
Unseal Key 2: 8SrHxIPBYvhKQlkBqqCfdKV1Dv1iySwye4B0wiJpaN7F
Unseal Key 3: TDHpWjUCh7/Pvw8quK9CgTTd2Av1lrKmrRYBrpbQKfXP
Unseal Key 4: i9stE9tBE1uUtkF+1lQpFoSf5RitZ+Y2q5F59kf5GrWp
Unseal Key 5: DejmwWHObiLAEIicAAoi4bPDbgOHEPzNezfbdz9ASJW5
Initial Root Token: s.gMDUU4tQ9mi19y7rmK2FKJmE
Unsealing is completed by running bao operator unseal three times, each time with a different unseal key.
kubectl exec -ti openbao-0 -- bao operator unseal Ay4AU20/cJEn/qNT0Ck0ElJP1lb6zwsDIeAxDgZVJUxg
kubectl exec -ti openbao-0 -- bao operator unseal 8SrHxIPBYvhKQlkBqqCfdKV1Dv1iySwye4B0wiJpaN7F
kubectl exec -ti openbao-0 -- bao operator unseal TDHpWjUCh7/Pvw8quK9CgTTd2Av1lrKmrRYBrpbQKfXP
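If you are re-initializing test clusters often, the key extraction can be scripted. The helper below is a sketch that parses the `Unseal Key N: <key>` lines of the text output shown above; the `kubectl` usage in the comment assumes the pod name `openbao-0` and the default 3-of-5 key threshold.

```shell
# extract_unseal_keys N: pull the first N unseal keys out of
# `bao operator init` text output ("Unseal Key i: <key>" lines).
# Pure text processing, so it can be tested without a cluster.
extract_unseal_keys() {
  awk '/^Unseal Key/ {print $4}' | head -n "$1"
}

# Usage against a live pod (assumption: pod name openbao-0, 3-of-5 threshold):
#   kubectl exec openbao-0 -- bao operator init > init.txt
#   for key in $(extract_unseal_keys 3 < init.txt); do
#     kubectl exec openbao-0 -- bao operator unseal "$key"
#   done
```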
Completion of unsealing can be verified in the pod logs.
2025-04-30T08:49:57.620Z [INFO] core: post-unseal setup complete
2025-04-30T08:49:57.620Z [INFO] core: vault is unsealed
OpenBao CLI
Just like the original Vault CLI, the OpenBao CLI allows you to manage OpenBao configurations and read/write secrets via commands.
You can download the binary matching your OS and architecture, named bao-[version]-[os]..., from the GitHub releases page.
Similar to the original, you specify the OpenBao address (service Cluster-IP) in the VAULT_ADDR environment variable and the token value in VAULT_TOKEN.
export VAULT_ADDR="http://10.111.206.117:8200"
export VAULT_TOKEN="s.gMDUU4tQ9mi19y7rmK2FKJmE"
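According to the OpenBao docs, the CLI also reads BAO_-prefixed variables, with the VAULT_-prefixed ones kept as a compatibility fallback (my assumption based on the documented rename), so the following should be equivalent:

```shell
# Assumption: BAO_* variables are read first, VAULT_* kept as a fallback.
export BAO_ADDR="http://10.111.206.117:8200"
export BAO_TOKEN="s.gMDUU4tQ9mi19y7rmK2FKJmE"
```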
This enables successful connection and authentication to the OpenBao server, allowing you to execute various commands such as reading and writing secrets.
$ bao secrets enable -path=tmp kv-v2
Success! Enabled the kv-v2 secrets engine at: tmp/
Incidentally, auto-completion settings do not seem to be available yet.
$ bao
Usage: bao <command> [args]

Common commands:
    read         Read data and retrieves secrets
    write        Write data, configuration, and secrets
    delete       Delete secrets and configuration
    list         List data or secrets
    login        Authenticate locally
    agent        Start an OpenBao agent
    server       Start an OpenBao server
    status       Print seal and HA status
    unwrap       Unwrap a wrapped secret

Other commands:
    audit              Interact with audit devices
    auth               Interact with auth methods
    debug              Runs the debug command
    kv                 Interact with OpenBao's Key-Value storage
    lease              Interact with leases
    monitor            Stream log messages from an OpenBao server
    namespace          Interact with namespaces
    operator           Perform operator-specific tasks
    patch              Patch data, configuration, and secrets
    path-help          Retrieve API help for paths
    pki                Interact with OpenBao's PKI Secrets Engine
    plugin             Interact with OpenBao plugins and catalog
    policy             Interact with policies
    print              Prints runtime configurations
    proxy              Start an OpenBao Proxy
    scan               Scan (recursively list) data or secrets
    secrets            Interact with secrets engines
    ssh                Initiate an SSH session
    token              Interact with tokens
    transit            Interact with OpenBao's Transit Secrets Engine
    version-history    Prints the version history of the target Vault server
OpenBao CSI Provider
The OpenBao documentation does not mention an equivalent of the Vault CSI provider; there are some mentions in GitHub issues, but implementation and formal maintenance do not appear to be established yet.
- https://github.com/openbao/openbao/issues/421
- https://github.com/openbao/openbao-csi-provider/issues/5
- https://github.com/openbao/openbao/issues/40
A fork exists in the following repository, but the last update was 10 months ago, and it does not appear to be actively developed.
That said, the functionality itself is exposed in the Helm chart values: specifying --set csi.enabled=true during helm install installs the CSI provider-related resources as well. However, the container image used for the CSI provider pod is docker.io/hashicorp/vault-csi-provider, the same image as the original Vault CSI provider.
# secrets-store-csi-driver-provider-vault
csi:
  # -- True if you want to install a secrets-store-csi-driver-provider-vault daemonset.
  #
  # Requires installing the secrets-store-csi-driver separately, see:
  # https://github.com/kubernetes-sigs/secrets-store-csi-driver#install-the-secrets-store-csi-driver
  #
  # With the driver and provider installed, you can mount OpenBao secrets into volumes
  # similar to the OpenBao Agent injector, and you can also sync those secrets into
  # Kubernetes secrets.
  enabled: false
  image:
    # -- image registry to use for csi image
    registry: "docker.io"
    # -- image repo to use for csi image
    repository: "hashicorp/vault-csi-provider"
    # -- image tag to use for csi image
    tag: "1.4.0"
    # -- image pull policy to use for csi image. if tag is "latest", set to "Always"
    pullPolicy: IfNotPresent
I verified functionality such as mounting secrets into pods, following the same steps as the Vault CSI provider verification in the previous article. It worked without issues, which is expected given that it uses the same vault-csi-provider container image.
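For reference, the SecretProviderClass used in such a verification keeps the same format as for Vault, since the provider binary is unchanged; only the address points at the OpenBao service. The names and paths below are illustrative:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: openbao-db-creds        # illustrative name
spec:
  provider: vault               # the provider binary is still vault-csi-provider
  parameters:
    vaultAddress: "http://openbao.openbao:8200"
    roleName: "database"        # a Kubernetes auth role configured on the server
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/database/secret"
        secretKey: "password"
```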
OpenBao Agent Injector
The component corresponding to Vault Agent is OpenBao Agent, which appears to be actively maintained, judging by its comprehensive documentation. While there doesn't seem to be a dedicated page for the Agent injector functionality on k8s, it is compatible with the Vault Agent injector and can be used the same way.
To enable the OpenBao Agent injector, specify --set agent.enabled=true during helm install; however, it is enabled by default, so no explicit flag is needed.
To inject secrets into a pod using the agent, you need to perform tasks such as creating secrets and enabling authentication on the OpenBao side. For these, you can use the same commands as in the previous article by simply replacing vault with bao.
# Enable the kv v2 secret engine
$ bao secrets enable -path=secret kv-v2
# Create a test secret
$ bao kv put -mount=secret database/secret username=myusername password=mypassword
# Enable Kubernetes authentication
$ bao auth enable kubernetes
# Start a shell in the openbao pod
$ kubectl exec -it openbao-0 -- sh
# Set environment variables to connect to the openbao server
/ $ export VAULT_ADDR="http://openbao.openbao:8200"
/ $ export VAULT_TOKEN="[root_token_value]"
# Set the k8s host for Kubernetes authentication
/ $ bao write auth/kubernetes/config kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
# Configure policy
$ bao policy write db-policy - <<EOF
path "secret/data/database/*" {
  capabilities = ["read"]
}
EOF
# Create a Kubernetes authentication role
$ bao write auth/kubernetes/role/database \
    bound_service_account_names=db \
    bound_service_account_namespaces=default \
    policies=db-policy \
    ttl=20m
# Create ServiceAccount
$ k create sa db
On the pod side, specify annotations in the same format as the Vault Agent injector. Even with OpenBao, the annotations appear to keep the vault.hashicorp.com prefix.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: demo
spec:
  selector:
    matchLabels:
      app: demo
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/role: database
        vault.hashicorp.com/agent-inject-secret-db-config: secret/database/secret
      labels:
        app: demo
    spec:
      serviceAccountName: db
      containers:
        - name: app
          image: busybox:1.29
          command:
            - "/bin/sleep"
            - "10000"
As a result, an init container vault-agent-init and a vault-agent container are injected into the pod upon creation, and the secret is retrieved.
Looking at the images, the openbao-agent-injector pod, which manages agent injection, uses the same container image as the original Vault Agent injector: docker.io/hashicorp/vault-k8s.
agent-injector
Containers:
  sidecar-injector:
    Container ID:   containerd://4b77e25ad1c43cd622077c061571797345591169c8c3365a15927ad93eb7ac51
    Image:          docker.io/hashicorp/vault-k8s:1.4.2
    Image ID:       docker.io/hashicorp/vault-k8s@sha256:690647d935f9bb17b4e9d1eb75d10b1b23cfd63d98ca1c456e88ae1429d6c656
    Port:           <none>
    Host Port:      <none>
    Args:
      agent-inject
      2>&1
    State:          Running
      Started:      Thu, 01 May 2025 07:18:42 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get https://:8080/health/ready delay=5s timeout=5s period=2s #success=1 #failure=2
    Readiness:      http-get https://:8080/health/ready delay=5s timeout=5s period=2s #success=1 #failure=2
    Startup:        http-get https://:8080/health/ready delay=5s timeout=5s period=5s #success=1 #failure=12
    Environment:
      AGENT_INJECT_LISTEN:                                :8080
      AGENT_INJECT_LOG_LEVEL:                             info
      AGENT_INJECT_VAULT_ADDR:                            http://openbao.openbao.svc:8200
      AGENT_INJECT_VAULT_AUTH_PATH:                       auth/kubernetes
      AGENT_INJECT_VAULT_IMAGE:                           quay.io/openbao/openbao:2.2.0
      AGENT_INJECT_TLS_AUTO:                              openbao-agent-injector-cfg
      AGENT_INJECT_TLS_AUTO_HOSTS:                        openbao-agent-injector-svc,openbao-agent-injector-svc.openbao,openbao-agent-injector-svc.openbao.svc
      AGENT_INJECT_LOG_FORMAT:                            standard
      AGENT_INJECT_REVOKE_ON_SHUTDOWN:                    false
      AGENT_INJECT_CPU_REQUEST:                           250m
      AGENT_INJECT_CPU_LIMIT:                             500m
      AGENT_INJECT_MEM_REQUEST:                           64Mi
      AGENT_INJECT_MEM_LIMIT:                             128Mi
      AGENT_INJECT_DEFAULT_TEMPLATE:                      map
      AGENT_INJECT_TEMPLATE_CONFIG_EXIT_ON_RETRY_FAILURE: true
      POD_NAME:                                           openbao-agent-injector-6fb589ddf9-k7vrx (v1:metadata.name)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfxkk (ro)
On the other hand, the vault-agent container injected into the pod uses OpenBao's specific image quay.io/openbao/openbao.
vault-agent
Init Containers:
  vault-agent-init:
    Container ID:  containerd://9df6c1886bb7b1bc74454b2082edf108f864bd2ff21751a666309eb4a433c235
    Image:         quay.io/openbao/openbao:2.2.0
    Image ID:      quay.io/openbao/openbao@sha256:19612d67a4a95d05a7b77c6ebc6c2ac5dac67a8712d8df2e4c31ad28bee7edaa
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -ec
    Args:
      echo ${VAULT_CONFIG?} | base64 -d > /home/vault/config.json && vault agent -config=/home/vault/config.json
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 01 May 2025 07:26:43 +0000
      Finished:     Thu, 01 May 2025 07:26:43 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:     250m
      memory:  64Mi
    Environment:
      NAMESPACE:         default (v1:metadata.namespace)
      HOST_IP:            (v1:status.hostIP)
      POD_IP:             (v1:status.podIP)
      VAULT_LOG_LEVEL:   info
      VAULT_LOG_FORMAT:  standard
      VAULT_CONFIG:      eyJhdXRvX2F1dGgiOnsibWV0aG9kIjp7InR5cGUiOiJrdWJlcm5ldGVzIiwibW91bnRfcGF0aCI6ImF1dGgva3ViZXJuZXRlcyIsImNvbmZpZyI6eyJyb2xlIjoiZGF0YWJhc2UiLCJ0b2tlbl9wYXRoIjoiL3Zhci9ydW4vc2VjcmV0cy9rdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3Rva2VuIn19LCJzaW5rIjpbeyJ0eXBlIjoiZmlsZSIsImNvbmZpZyI6eyJwYXRoIjoiL2hvbWUvdmF1bHQvLnZhdWx0LXRva2VuIn19XX0sImV4aXRfYWZ0ZXJfYXV0aCI6dHJ1ZSwicGlkX2ZpbGUiOiIvaG9tZS92YXVsdC8ucGlkIiwidmF1bHQiOnsiYWRkcmVzcyI6Imh0dHA6Ly9vcGVuYmFvLm9wZW5iYW8uc3ZjOjgyMDAifSwidGVtcGxhdGUiOlt7ImRlc3RpbmF0aW9uIjoiL3ZhdWx0L3NlY3JldHMvZGItY29uZmlnIiwiY29udGVudHMiOiJ7eyB3aXRoIHNlY3JldCBcInNlY3JldC9kYXRhYmFzZS9zZWNyZXRcIiB9fXt7IHJhbmdlICRrLCAkdiA6PSAuRGF0YSB9fXt7ICRrIH19OiB7eyAkdiB9fVxue3sgZW5kIH19e3sgZW5kIH19IiwibGVmdF9kZWxpbWl0ZXIiOiJ7eyIsInJpZ2h0X2RlbGltaXRlciI6In19In1dLCJ0ZW1wbGF0ZV9jb25maWciOnsiZXhpdF9vbl9yZXRyeV9mYWlsdXJlIjp0cnVlfX0=
    Mounts:
      /home/vault from home-init (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fc6vp (ro)
      /vault/secrets from vault-secrets (rw)
Containers:
  app:
    Container ID:   containerd://cbbf6923b0462257d50c5104bfcc623dd67bd6e5e26364382309b80542b6ec25
    Image:          busybox:1.29
    Image ID:       docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796
    Port:           <none>
    Host Port:      <none>
    Command:
      /bin/sleep
      10000
    State:          Running
      Started:      Thu, 01 May 2025 07:26:43 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fc6vp (ro)
      /vault/secrets from vault-secrets (rw)
  vault-agent:
    Container ID:  containerd://4ae04b8562725c656b9b1a0b99cad75725e74159047dedc9b0683330312b588e
    Image:         quay.io/openbao/openbao:2.2.0
    Image ID:      quay.io/openbao/openbao@sha256:19612d67a4a95d05a7b77c6ebc6c2ac5dac67a8712d8df2e4c31ad28bee7edaa
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -ec
    Args:
      echo ${VAULT_CONFIG?} | base64 -d > /home/vault/config.json && vault agent -config=/home/vault/config.json
    State:          Running
      Started:      Thu, 01 May 2025 07:26:43 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:     250m
      memory:  64Mi
    Environment:
      NAMESPACE:         default (v1:metadata.namespace)
      HOST_IP:            (v1:status.hostIP)
      POD_IP:             (v1:status.podIP)
      VAULT_LOG_LEVEL:   info
      VAULT_LOG_FORMAT:  standard
      VAULT_CONFIG:      eyJhdXRvX2F1dGgiOnsibWV0aG9kIjp7InR5cGUiOiJrdWJlcm5ldGVzIiwibW91bnRfcGF0aCI6ImF1dGgva3ViZXJuZXRlcyIsImNvbmZpZyI6eyJyb2xlIjoiZGF0YWJhc2UiLCJ0b2tlbl9wYXRoIjoiL3Zhci9ydW4vc2VjcmV0cy9rdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3Rva2VuIn19LCJzaW5rIjpbeyJ0eXBlIjoiZmlsZSIsImNvbmZpZyI6eyJwYXRoIjoiL2hvbWUvdmF1bHQvLnZhdWx0LXRva2VuIn19XX0sImV4aXRfYWZ0ZXJfYXV0aCI6ZmFsc2UsInBpZF9maWxlIjoiL2hvbWUvdmF1bHQvLnBpZCIsInZhdWx0Ijp7ImFkZHJlc3MiOiJodHRwOi8vb3BlbmJhby5vcGVuYmFvLnN2Yzo4MjAwIn0sInRlbXBsYXRlIjpbeyJkZXN0aW5hdGlvbiI6Ii92YXVsdC9zZWNyZXRzL2RiLWNvbmZpZyIsImNvbnRlbnRzIjoie3sgd2l0aCBzZWNyZXQgXCJzZWNyZXQvZGF0YWJhc2Uvc2VjcmV0XCIgfX17eyByYW5nZSAkaywgJHYgOj0gLkRhdGEgfX17eyAkayB9fToge3sgJHYgfX1cbnt7IGVuZCB9fXt7IGVuZCB9fSIsImxlZnRfZGVsaW1pdGVyIjoie3siLCJyaWdodF9kZWxpbWl0ZXIiOiJ9fSJ9XSwidGVtcGxhdGVfY29uZmlnIjp7ImV4aXRfb25fcmV0cnlfZmFpbHVyZSI6dHJ1ZX19
    Mounts:
      /home/vault from home-sidecar (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fc6vp (ro)
      /vault/secrets from vault-secrets (rw)
In the app container, a file containing the secret is created under /vault/secrets, just as with the Vault Agent injector.
$ k exec -it app-79956f485b-mpbw6 -- cat /vault/secrets/db-config
Defaulted container "app" out of: app, vault-agent, vault-agent-init (init)
data: map[password:mypassword username:myusername] # Corresponds to the secret
metadata: map[created_time:2025-05-01T07:23:17.938169909Z custom_metadata:<nil> deletion_time: destroyed:false version:1]
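The rendered format above is the agent's default map template. If you want a different output, say env-file style KEY=value lines, it can be overridden with an agent-inject-template annotation; the annotation names are the standard vault-k8s ones, and the template below is illustrative:

```yaml
annotations:
  vault.hashicorp.com/agent-inject: 'true'
  vault.hashicorp.com/role: database
  vault.hashicorp.com/agent-inject-secret-db-config: secret/database/secret
  # Render KEY=value lines instead of the default map output (illustrative).
  vault.hashicorp.com/agent-inject-template-db-config: |
    {{- with secret "secret/database/secret" -}}
    {{- range $k, $v := .Data.data }}
    {{ $k }}={{ $v }}
    {{- end }}
    {{- end }}
```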
OpenBao Secrets Operator
The component corresponding to the Vault Secrets Operator is a fork of the vault-secrets-operator repository, available below:
However, this was simply forked as part of the effort for the following issue, and with the last update being two years ago, it remains largely unmaintained.
Others
Auto unseal
Vault auto unseal, which automates unsealing at startup, is also implemented in OpenBao.
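For example, auto unseal with AWS KMS is configured with a seal stanza in the server configuration; my assumption, based on OpenBao being a Vault fork, is that the stanza format is unchanged. The key alias below is a placeholder:

```hcl
# Server config fragment -- assumption: same seal stanza format as Vault.
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/openbao-unseal"   # placeholder key alias
}
```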
Web UI
Vault has a web UI, and OpenBao also includes UI functionality.
It is enabled by default in Helm and can be accessed via port 8200 of the openbao service.
k port-forward -n openbao services/openbao 8200:8200 --address 0.0.0.0
On the login screen, you can log in using method: token and token: [root token value].
Basically, it is similar to the Vault UI, allowing you to view secrets, authentication methods, policies, and more.

Terraform Provider
In Terraform, you can manage Vault configurations and other resources on the Terraform side by using the Vault provider.
While OpenBao currently does not have its own dedicated provider, it is compatible with the Vault provider, so you can still create secrets and other resources for OpenBao.
For example, in the following main.tf which enables kv-v2 and creates a secret, you can specify the OpenBao address and token in the provider "vault" block, and the rest of the configuration will work without any changes.
provider "vault" {
  address = "http://10.111.206.117:8200"
  token   = "s.gMDUU4tQ9mi19y7rmK2FKJmE"
}

resource "vault_mount" "kvv2" {
  path        = "kvv2"
  type        = "kv"
  options     = { version = "2" }
  description = "KV Version 2 secret engine mount"
}

resource "vault_kv_secret_v2" "example" {
  mount               = vault_mount.kvv2.path
  name                = "secret"
  cas                 = 1
  delete_all_versions = true
  data_json = jsonencode(
    {
      zip = "zap",
      foo = "bar"
    }
  )
  custom_metadata {
    max_versions = 5
    data = {
      foo = "vault@example.com",
      bar = "12345"
    }
  }
}
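Hardcoding the token in main.tf is only for demonstration. The Vault provider also reads VAULT_ADDR and VAULT_TOKEN from the environment, so the provider block can be left empty and the credentials kept out of the configuration:

```hcl
# Address and token are picked up from the VAULT_ADDR / VAULT_TOKEN
# environment variables set earlier.
provider "vault" {}
```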
It works with plain Terraform, but while we're at it, let's apply it with OpenTofu.
$ tofu init
$ tofu plan
$ tofu apply -auto-approve
Plan: 2 to add, 0 to change, 0 to destroy.
vault_mount.kvv2: Creating...
vault_mount.kvv2: Creation complete after 0s [id=kvv2]
vault_kv_secret_v2.example: Creating...
vault_kv_secret_v2.example: Creation complete after 0s [id=kvv2/data/secret]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
You can verify with the bao CLI that the foo and zip secret key-values have actually been written.
$ bao kv get -mount=kvv2 secret
== Secret Path ==
kvv2/data/secret

======= Metadata =======
Key                Value
---                -----
created_time       2025-05-01T13:56:44.6182995Z
custom_metadata    map[bar:12345 foo:vault@example.com]
deletion_time      n/a
destroyed          false
version            1

=== Data ===
Key    Value
---    -----
foo    bar
zip    zap
Summary of Support Status
| Feature Name in HashiCorp Vault | Feature Name in OpenBao | GitHub Repo | Supported | Comment |
|---|---|---|---|---|
| helm | - | openbao-helm | ✅ | |
| vault CLI | bao | openbao | ✅ | |
| Vault CSI provider | OpenBao CSI Provider | openbao-csi-provider | ❌ | Last updated 10 months ago *1 |
| Vault Agent injector | OpenBao Agent injector | openbao | ✅ | The agent-injector uses the HashiCorp image. The agent itself uses an OpenBao-specific image. |
| Vault Secrets Operator | OpenBao Secrets Operator | openbao-secrets-operator | ❌ | Last updated 2 years ago *1 |
*1: While not unusable, the container images used are from HashiCorp, so it is essentially the same as using the original Vault CSI provider, etc.
Conclusion
While the Vault CSI provider and Vault Secrets Operator have been forked into their own GitHub repositories, they are not yet officially supported in OpenBao. On the other hand, the Agent injector works almost exactly like the Vault Agent injector, so if you primarily use the Agent injector for pod secret management, it should be safe to migrate to OpenBao.
Compared to OpenTofu, a Terraform fork in a similar position, OpenBao has lower visibility and fewer GitHub stars (though this may simply reflect that Vault has fewer users than Terraform). However, given that commits land continuously at a reasonable frequency and that it is positioned as a Linux Foundation At Large Stage Project[1], we can expect further growth in the future.
[1] An "At Large" project is an open-source initiative that the TAC (Technical Advisory Committee) believes is important, or has the potential to become important, to the overall edge ecosystem. These are typically early-stage projects that, in exchange for community support, aim to add new functionality across the LF Edge open edge platform.