OpenClaw on Kubernetes: A Practical Installation Guide
A practical guide to installing OpenClaw on Kubernetes as part of a broader AI Delivery Workflow, including storage choices, Helm configuration, and first access.
In my last post, I laid out why I’m building an AI Delivery Workflow instead of relying on a pile of disconnected AI tools. This article is the first practical step in that direction: getting OpenClaw running on Kubernetes.
For me, OpenClaw is not interesting as just another chat interface. It becomes useful when it lives inside a real delivery stack alongside GitHub, coding agents, and ArgoCD. That is why I want it running as part of real infrastructure, not sitting off to the side as a disconnected experiment.
This guide is for someone who already knows their way around a Kubernetes cluster and wants a clean, repeatable OpenClaw install. I’m assuming you already have a working cluster, basic kubectl access, Helm installed, and a general sense of how you handle storage, secrets, and service access in your own environment.
This is not a Kubernetes primer. The goal here is simpler than that: get OpenClaw running in a way that is understandable, debuggable, and easy to repeat.
What this guide covers
This guide walks through:
- getting a working provider token
- creating the Kubernetes secret
- choosing a storage path that matches your cluster
- creating a working Helm values file
- installing OpenClaw
- verifying that the deployment actually works
The goal is that someone with a Kubernetes cluster can come away with a running OpenClaw install.
Prerequisites
Before you begin, you should already have:
- a working Kubernetes cluster with `kubectl` access
- Helm 3 installed
- persistent storage available for workloads in your cluster
- a model provider you plan to use
For storage, I would treat one of these as a prerequisite:
- a default `StorageClass` that can dynamically provision a PVC for OpenClaw, or
- a PV/PVC arrangement you have already prepared for this cluster
I’m intentionally not turning this into a distro-specific storage setup guide. That varies too much across managed Kubernetes, bare metal, k3s, Talos, and everything in between. This article assumes you already know how storage is handled in your environment.
I’m using GitHub Copilot as the provider example in this guide because its authentication flow is the least obvious part of the setup. If you plan to use another provider, the overall install pattern is similar, but the secret values and model configuration will change.
Step 1: Create a namespace
Create a dedicated namespace for OpenClaw:
```shell
kubectl create namespace openclaw
```
Step 2: Get a GitHub Copilot OAuth token
If you are using GitHub Copilot, this is the part most likely to trip you up.
A GitHub Personal Access Token with the `copilot` scope is not enough for model access here. You need an OAuth token from GitHub’s device flow.
Request a device code:
```shell
curl -s -X POST https://github.com/login/device/code \
  -H "Accept: application/json" \
  -d "client_id=Iv1.b507a08c87ecfe98&scope=read:user"
```
This returns a `device_code`, `user_code`, and `verification_uri`.
Open https://github.com/login/device in your browser and enter the user_code.
Then poll for the access token:
```shell
curl -s -X POST https://github.com/login/oauth/access_token \
  -H "Accept: application/json" \
  -d "client_id=Iv1.b507a08c87ecfe98&device_code=<DEVICE_CODE>&grant_type=urn:ietf:params:oauth:grant-type:device_code"
```
Repeat that command until the response includes an `access_token` beginning with `ghu_`.
To verify the token works:
```shell
curl -s -o /dev/null -w "%{http_code}" \
  https://api.github.com/copilot_internal/v2/token \
  -H "Authorization: token ghu_YOUR_TOKEN"
```
A `200` response confirms the token is valid.
Step 3: Discover available model IDs
The model names you see in a UI do not always match the exact model IDs you need in configuration.
You can query the available models directly:
```shell
# Get a session token
COPILOT_SESSION=$(curl -s https://api.github.com/copilot_internal/v2/token \
  -H "Authorization: token ghu_YOUR_TOKEN")

# Extract the API endpoint and token
API_URL=$(echo "$COPILOT_SESSION" | jq -r '.endpoints.api')
SESSION_TOKEN=$(echo "$COPILOT_SESSION" | jq -r '.token')

# List models
curl -s "$API_URL/models" \
  -H "Authorization: Bearer $SESSION_TOKEN" \
  -H "Copilot-Integration-Id: vscode-chat" | jq '.data[].id'
```
When you configure OpenClaw, prefix the model ID with `github-copilot/`.
For example:
```shell
github-copilot/gpt-5.4
```
Step 4: Create the Kubernetes secrets
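If the model list is long, you can go straight from the `/models` response to configuration-ready names in one pass. A small convenience sketch (the `echo` stands in for the real API response from the previous step):

```shell
# Prefix each raw model ID with "github-copilot/" for use in openclaw.json.
# The inlined JSON is sample data; pipe in the /models response instead.
echo '{"data":[{"id":"gpt-5.4"},{"id":"o4-mini"}]}' |
  jq -r '.data[].id | "github-copilot/" + .'
# prints:
#   github-copilot/gpt-5.4
#   github-copilot/o4-mini
```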
Create one secret for your provider token:
```shell
kubectl create secret generic openclaw-provider-secret -n openclaw \
  --from-literal=GITHUB_TOKEN='ghu_YOUR_OAUTH_TOKEN'
```
Then create a second secret for the OpenClaw gateway password:
```shell
kubectl create secret generic openclaw-auth-secret -n openclaw \
  --from-literal=OPENCLAW_GATEWAY_PASSWORD='YOUR_GATEWAY_PASSWORD'
```
In this example:
- `GITHUB_TOKEN` is the GitHub Copilot OAuth token from the device flow
- `OPENCLAW_GATEWAY_PASSWORD` is the password you will use to sign in to the OpenClaw control UI
If you are using another model provider, substitute the correct provider credential in the provider secret.
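Before moving on, it can be worth a quick sanity check that the secret holds what you think it holds. A sketch that decodes only the first few characters, so the full token never lands in your terminal history:

```shell
# Spot-check the provider secret: print just the token prefix.
# For a Copilot OAuth token this should print "ghu_".
kubectl get secret openclaw-provider-secret -n openclaw \
  -o jsonpath='{.data.GITHUB_TOKEN}' | base64 -d | head -c 4; echo
```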
Step 5: Confirm your storage path
The key storage requirement is simple: OpenClaw needs writable persistent state under `/home/node/.openclaw`.
Treat storage as a prerequisite here, not a side quest. The practical question is whether your cluster can give OpenClaw a writable home directory cleanly and predictably.
For chart version `openclaw/openclaw 1.5.10`, the detail that matters most is that `/home/node/.openclaw` must be mounted writable.
If you miss that, the pod can crash during init because it tries to write `/home/node/.openclaw/openclaw.json` onto the read-only root filesystem.
Option A: Non-persistent smoke test
Use this if you want the fastest proven path to a working install.
This was the cleanest path I validated on lernaean-dev. The trick is that disabling `app-template.persistence.data` by itself is not enough for this chart version. You also need to mount a writable `emptyDir` at both `/tmp` and `/home/node/.openclaw`.
If you choose this path, you do not need to create a PV or PVC in this step.
Option B: Persistent storage
Use this if you want OpenClaw to keep its local state between restarts.
The exact implementation depends on your cluster. If your cluster has a `StorageClass` with dynamic provisioning, the storage driver handles directory creation and ownership, and no extra steps are needed. If you are using a static `hostPath` PV with `DirectoryOrCreate`, Kubernetes creates the directory as `root:root` on first use. Since OpenClaw runs as uid `1000`, the `init-config` container will fail with `Permission denied` and the pod will crash-loop. In that case, add the `fix-permissions` init container shown in the values snippet below.
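If you are on the static hostPath route and would rather sidestep the init container entirely, you can pre-create the directory on the node with the right ownership before the pod ever starts. A sketch, assuming the `/var/openclaw-data` path used in this guide's PV example and OpenClaw's uid/gid of 1000:

```shell
# Run on the node that will back the hostPath PV.
# Path and uid/gid match this guide's example; adjust for your cluster.
sudo mkdir -p /var/openclaw-data
sudo chown 1000:1000 /var/openclaw-data
sudo chmod 700 /var/openclaw-data
```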
First check whether your cluster already has a storage class:
```shell
kubectl get storageclass
```
If your cluster has a default storage class, or a storage class you want to use, Helm may be able to create the PVC for you during install.
If your cluster does not have a usable storage class, create storage first. For example:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openclaw
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  claimRef:
    namespace: openclaw
    name: openclaw
  hostPath:
    path: /var/openclaw-data
    type: DirectoryOrCreate
```
Apply it:
```shell
kubectl apply -f openclaw-storage.yaml
```
Then verify the PVC is ready:
```shell
kubectl get pvc openclaw -n openclaw
```
The success condition is simple: the PVC should show `Bound`.
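If you script your installs, `kubectl wait` can block until the claim actually binds instead of you polling `kubectl get` by hand:

```shell
# Block until the PVC reports phase=Bound, or fail after 60 seconds
kubectl wait --for=jsonpath='{.status.phase}'=Bound \
  pvc/openclaw -n openclaw --timeout=60s
```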
Two practical caveats from testing:
- this chart version expected the PVC name to be `openclaw` when persistence stayed enabled
- `existingClaim` under `app-template.persistence.data` works, but requires `--skip-schema-validation` at install time
- if using a static `hostPath` PV with `DirectoryOrCreate` and a fresh directory, include the `fix-permissions` init container in the values snippet. If your storage class or an existing directory is already owned by uid `1000`, you can omit it.
Step 6: Create the values file
Create a file named `openclaw-values.yaml`.
The goal of this values file is to keep the deployment explicit, simple, and easy to debug. Start with this shared base config for either storage path:
```yaml
app-template:
  configMode: merge
  controllers:
    main:
      containers:
        chromium:
          enabled: false
        main:
          envFrom:
            - secretRef:
                name: openclaw-provider-secret
            - secretRef:
                name: openclaw-auth-secret
  configMaps:
    config:
      enabled: true
      data:
        openclaw.json: |
          {
            "gateway": {
              "port": 18789,
              "mode": "local",
              "auth": {
                "mode": "password"
              },
              "controlUi": {
                "dangerouslyAllowHostHeaderOriginFallback": true
              }
            },
            "browser": {
              "enabled": false
            },
            "agents": {
              "defaults": {
                "workspace": "/home/node/.openclaw/workspace",
                "model": {
                  "primary": "github-copilot/gpt-5.4"
                },
                "userTimezone": "UTC",
                "timeoutSeconds": 600,
                "maxConcurrent": 1
              },
              "list": [
                {
                  "id": "main",
                  "default": true,
                  "identity": {
                    "name": "OpenClaw",
                    "emoji": "🦞"
                  }
                }
              ]
            },
            "session": {
              "scope": "per-sender",
              "store": "/home/node/.openclaw/sessions",
              "reset": {
                "mode": "idle",
                "idleMinutes": 60
              }
            },
            "logging": {
              "level": "info",
              "consoleLevel": "info",
              "consoleStyle": "compact",
              "redactSensitive": "tools"
            },
            "tools": {
              "profile": "full",
              "web": {
                "search": {
                  "enabled": false
                },
                "fetch": {
                  "enabled": true
                }
              }
            }
          }
```
If you chose the non-persistent smoke-test path, use this storage section:
```yaml
persistence:
  data:
    enabled: false
  tmp:
    enabled: true
    type: emptyDir
    advancedMounts:
      main:
        init-config:
          - path: /tmp
          - path: /home/node/.openclaw
        init-skills:
          - path: /tmp
          - path: /home/node/.openclaw
        main:
          - path: /tmp
          - path: /home/node/.openclaw
```
If you chose the persistent path, use this storage section instead:
```yaml
controllers:
  main:
    initContainers:
      fix-permissions:
        image:
          repository: busybox
          tag: "1.36"
        command:
          - sh
          - -c
          - |
            set -eu
            mkdir -p /home/node/.openclaw
            chown -R 1000:1000 /home/node/.openclaw
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          runAsNonRoot: false
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          seccompProfile:
            type: RuntimeDefault
          capabilities:
            drop:
              - ALL
            add:
              - CHOWN
persistence:
  data:
    enabled: true
    type: persistentVolumeClaim
    accessMode: ReadWriteOnce
    size: 5Gi
    advancedMounts:
      main:
        fix-permissions:
          - path: /home/node/.openclaw
        init-config:
          - path: /home/node/.openclaw
        init-skills:
          - path: /home/node/.openclaw
        main:
          - path: /home/node/.openclaw
  tmp:
    enabled: true
    type: emptyDir
```
The `tmp` emptyDir is still needed even with persistent storage. OpenClaw writes temporary files outside of `/home/node/.openclaw` during init, so the root filesystem must have a writable `/tmp` even when the PVC is handling persistent state.
A few practical notes:
- `OPENCLAW_GATEWAY_PASSWORD` comes from `openclaw-auth-secret`
- the browser sidecar is disabled here to keep the install lighter
- if you use another model provider, update both the credentials and the configured model name
- the smoke-test path is ephemeral; pod recreation will discard local state
Step 7: Install OpenClaw
Add the Helm repo:
```shell
helm repo add openclaw https://serhanekicii.github.io/openclaw-helm
helm repo update
```
Then install the chart:
```shell
helm install openclaw openclaw/openclaw -n openclaw \
  -f openclaw-values.yaml \
  --skip-schema-validation
```
Step 8: Wait for the deployment
Wait for the rollout:
```shell
kubectl rollout status deployment openclaw -n openclaw --timeout=600s
```
You can also check the pods directly:
```shell
kubectl get pods -n openclaw
```
You want to see the OpenClaw pod reach `Running`.
If it does not, inspect the pod and recent events:
```shell
kubectl describe pod -n openclaw <pod-name>
kubectl get events -n openclaw --sort-by=.lastTimestamp
```
Step 9: Access the control UI
The simplest way to test a fresh install is with port forwarding:
```shell
kubectl port-forward -n openclaw deployment/openclaw 18789:18789
```
Then open:
```shell
http://localhost:18789
```
Sign in with the password you stored in the `openclaw-auth-secret` secret.
You can expose OpenClaw through ingress or a load balancer later if you want always-on remote access. For the first install, I would treat that as an operational choice, not a prerequisite. Port forwarding is the fastest way to verify that everything works.
Quick troubleshooting notes
If the install does not work the first time, these are the first places I would check:
- Provider token: if you are using GitHub Copilot, make sure you used the OAuth device flow token, not a PAT.
- Model ID: query the API instead of guessing the exact model name.
- Writable home directory: for chart `1.5.10`, make sure `/home/node/.openclaw` is mounted writable.
- Storage: if your PVC stays pending, check whether your cluster has a default storage class, whether you need to set one explicitly, or whether you need to create a PV first.
- Chart schema: the wrapper chart schema can reject values that look reasonable. If using `existingClaim` under `app-template.persistence.data`, add `--skip-schema-validation` at install time.
- Cold start time: initial image pulls can take a while, especially on smaller or ARM64 systems.
- Token expiry: GitHub Copilot OAuth tokens can expire and may need to be refreshed later.
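When the pod crash-loops, the init containers are usually where the first error surfaces. A log-gathering sketch; note the label selector is an assumption based on common Helm chart labels, so check `kubectl get pods -n openclaw --show-labels` if it returns nothing:

```shell
# Find the OpenClaw pod and dump recent logs from every container,
# init containers included -- permission errors show up in init-config.
POD=$(kubectl get pods -n openclaw \
  -l app.kubernetes.io/name=openclaw \
  -o jsonpath='{.items[0].metadata.name}')
kubectl logs -n openclaw "$POD" --all-containers --prefix --tail=50
```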
Why this matters in the bigger workflow
Getting OpenClaw running is not the whole point. The point is to make it one reliable part of a larger delivery system.
In the workflow I’m building, GitHub holds the work queue, coding agents help implement changes, ArgoCD handles deployment, and OpenClaw acts as the operating layer tying those pieces together. Infrastructure only matters if it supports a repeatable way to move work from idea to production. That is the role I want OpenClaw to play.
This guide covers the first part of that path: getting OpenClaw running cleanly so the rest of the workflow has a solid base.
Conclusion
OpenClaw makes sense on Kubernetes when you want your AI operating layer to live alongside the rest of your delivery infrastructure instead of floating outside it.
Once provider credentials, storage, and chart configuration are sorted out, the deployment itself is fairly straightforward. The real friction is usually around authentication, model naming, and choosing the right storage path for your cluster.
If you are trying to move from scattered AI tooling toward a usable AI Delivery Workflow, this is where the system starts to become real. A clean OpenClaw install does not finish the job, but it gives you a solid operating base for everything that comes next.