Docker, Day 12 of the Free Comprehensive OpenClaw Course
Containerize Your Agent
Why this matters
OpenClaw docker is how you go from a script running on your laptop to a real deployment that survives a reboot, isolates the agent from your host, and ships the same way every time. This lesson walks through Docker Compose, sandbox modes, the security-hardening defaults you should run in production, and the deployment patterns that scale across multiple agents on one host.
How do I run OpenClaw inside Docker?
OpenClaw docker is the canonical production deployment shape. Docker isolates the agent from your host filesystem, makes the deployment reproducible across machines, and gives you the sandbox the agent should be running in once it has destructive tools enabled. The full setup, end to end:
Use the official compose file, saved as docker-compose.yml in your agent folder:
services:
  openclaw:
    image: openclaw/openclaw:2026.5.1
    container_name: openclaw-agent
    restart: unless-stopped
    env_file: .env
    volumes:
      - ./workspace:/workspace
      - ./logs:/logs
    ports:
      - "3000:3000"
    mem_limit: 1g
    cpus: 1.0
Then run docker compose up -d and the agent runs in the background. Use docker compose logs -f openclaw to tail the runtime output, and docker compose down to stop it.
The volume mounts give the container access to your workspace files and a logs directory but nothing else on your host. That is the whole isolation point.
A custom Dockerfile, when you need it
The official image covers most cases, but if you need to bake in custom Skills or specific Node modules, write your own Dockerfile:
# Dockerfile
FROM node:20.18-alpine
WORKDIR /app
# Install OpenClaw and pin the version
RUN npm install -g openclaw@2026.5.1
# Bake in any custom dependencies for your scripts
COPY package.json package-lock.json ./
RUN npm ci --only=production
# Copy the workspace files
COPY workspace ./workspace
COPY skills ./skills
# Run as non-root user for safety
RUN adduser -D -u 1000 openclaw
USER openclaw
EXPOSE 3000
CMD ["openclaw", "run", "--workspace", "./workspace"]
Build with docker build -t my-openclaw . and reference image: my-openclaw in your compose file. The non-root user line is the easy security win most people skip; do not skip it.
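Wiring the custom image into the compose file is a one-line change; a minimal sketch (the rest of the service block stays exactly as in the compose file earlier):

```yaml
services:
  openclaw:
    # locally built tag replaces openclaw/openclaw:2026.5.1
    image: my-openclaw
```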
What sandbox mode should I use in production?
OpenClaw's sandbox modes restrict what the container can read, write, and call out to. The three levels:
- strict. Read-only filesystem except for the workspace and logs volumes. No outbound network except to allowlisted hosts (your model provider, your channel hosts). This is the right default for any agent that has the exec or write tools enabled.
- standard. Read-write workspace, full outbound network. Use when the agent needs to fetch arbitrary URLs (research agents, browse-the-web Skills).
- open. No restrictions, container can read and write anywhere it can reach inside its own filesystem, full outbound network. Only for development.
Production deployments should use strict by default and only relax to standard for specific agents that need it. The mode goes in your AGENTS.md or in the compose env. The default since OpenClaw 2026.4 is strict, which is the right call.
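Per-agent relaxation can go in the compose environment block; a minimal sketch, using the OPENCLAW_SANDBOX_MODE variable from the production compose file below, assuming a research agent that genuinely needs arbitrary outbound fetches:

```yaml
services:
  openclaw:
    environment:
      # strict is the default since OpenClaw 2026.4;
      # relax to standard per-agent only when the agent needs it
      - OPENCLAW_SANDBOX_MODE=standard
```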
A full compose file with volumes, limits, and logging
The minimum-viable example earlier covers the basics. The production-ready compose file adds resource limits, a healthcheck, log rotation, and a separate ollama service if you want a fully local model:
services:
  openclaw:
    image: openclaw/openclaw:2026.5.1
    container_name: openclaw-agent
    restart: unless-stopped
    env_file: .env
    environment:
      - OPENCLAW_SANDBOX_MODE=strict
      - OPENCLAW_LOG_LEVEL=info
    volumes:
      - ./workspace:/workspace
      - ./logs:/logs
      - ./skills:/skills:ro
    ports:
      - "127.0.0.1:3000:3000"
    mem_limit: 1g
    cpus: 1.0
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/healthz"]
      interval: 60s
      timeout: 5s
      retries: 3
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    volumes:
      - ollama-data:/root/.ollama
    ports:
      - "127.0.0.1:11434:11434"

volumes:
  ollama-data:
Three production wins from this file. The 127.0.0.1: bind on the port stops the agent from being publicly reachable on the host's external IP, which prevents the most common "I accidentally exposed my agent to the internet" mistake. The log rotation prevents the disk from filling. The mem_limit prevents a single bad prompt from OOMing the host.
Do I need Kubernetes for a production OpenClaw deployment?
No. Compose is enough for almost every personal and small-team deployment. One agent or a small handful of agents on one host, with restart: unless-stopped policies and systemd (or the host's equivalent) keeping the Docker daemon alive across reboots, gets you 99% uptime with a setup you can debug yourself.
Kubernetes makes sense once you are running dozens of agents across multiple hosts, or once you want rolling updates and per-agent autoscaling, or once your ops team already speaks Kubernetes. For a personal agent, or a small consulting deployment for a single client, Compose is the right answer. The day you outgrow it, the migration to a Helm chart is a few hours of work.
Pin your versions
One operational note that bites people. The Dockerfile examples in many places use openclaw/openclaw:latest, which is fine for testing and dangerous in production. :latest points at whatever the newest release is, so any image pull or container recreation can silently land a new OpenClaw version with a breaking change in your production deployment without you doing anything.
Pin to a specific tag for production: openclaw/openclaw:2026.5.1. Watch the changelog (CHANGELOG.md) for releases and bump the tag deliberately when you have time to test. The same applies to the Node base image inside the Dockerfile; pin it to at least a major.minor.
The same lesson applies to your Skills. Pin Skill versions in SKILLS.md once you have a working setup. Floating Skill versions are the second most common cause of "it worked yesterday" reports.
Common Docker pitfalls
Three failures show up most often. The first: volume permission errors. The container runs as a non-root user (good security), but the workspace folder on the host is owned by your user, so the container cannot write to it. Fix by changing the folders' ownership to match the container's user ID, usually chown -R 1000:1000 workspace logs.
The second: the container exits immediately on boot. The runtime hit a fatal error and exited. Run docker compose logs openclaw to see the last lines before the exit. Common causes: a missing required env var, malformed AGENTS.md syntax, or port 3000 already in use on the host.
The third: healthcheck flapping. The agent boots fine, but the healthcheck times out, so Docker marks the container unhealthy (and any auto-heal supervisor keeps restarting it). This usually means the runtime takes longer than 60 seconds to become ready (large MEMORY.md, slow Ollama cold start). Add a start_period of 180s to the healthcheck so startup time does not count against it.
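The fix is a one-key addition to the healthcheck block; a sketch, assuming the curl-based healthcheck from the compose file above:

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/healthz"]
  interval: 60s
  timeout: 5s
  retries: 3
  start_period: 180s  # failures during this grace period don't count toward retries
```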
Bind mounts versus named volumes for the workspace
The compose examples above use bind mounts, the ./workspace:/workspace pattern. The host's ./workspace folder maps directly into the container, which means you can edit AGENTS.md or MEMORY.md on the host with your normal editor and the changes are visible inside the container immediately. This is what most people want for development and personal-agent use.
The alternative is a named Docker volume, which Docker manages and lives in /var/lib/docker/volumes. Named volumes have better performance characteristics on macOS and Windows where bind mounts go through a translation layer, and they survive container deletion the same way bind mounts do. The downside is that editing files inside a named volume requires either entering the container or running a small helper container, which is more friction.
The honest rule of thumb. Use bind mounts on Linux hosts where the agent runs in production and you want to edit workspace files with your normal editor. Use named volumes on macOS or Windows dev machines where bind-mount IO is slow enough to actually matter. For the data inside the container that does not need editing (the runtime's internal state cache, embedding indices, model weights for Ollama), always use named volumes regardless of OS.
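For comparison, a named-volume variant of the workspace mount might look like this. A sketch; the helper-container command in the comment is one common way to edit files inside a named volume, and note that Compose prefixes the on-disk volume name with the project name:

```yaml
services:
  openclaw:
    image: openclaw/openclaw:2026.5.1
    volumes:
      # named volume instead of the ./workspace bind mount
      - workspace-data:/workspace

volumes:
  workspace-data:

# To edit files inside the volume, use a throwaway helper container, e.g.:
#   docker run --rm -it -v <project>_workspace-data:/workspace alpine sh
```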
Docker network isolation
By default, Docker creates a bridge network for each compose stack, and all services in the stack can reach each other by service name. The openclaw container can reach the ollama container at http://ollama:11434 without any extra config. This is what you want for local model setups.
What you usually do not want: the agent container reachable from any other container on the host, or from the public internet. The 127.0.0.1: bind in the port mapping handles the public side; the agent is only reachable from localhost on the host. For container-to-container, use Docker networks to scope which services can talk to each other. A two-network setup, with an "agent" network for openclaw plus its dependencies and a separate "monitoring" network for prometheus or healthchecks, is a common shape on bigger deployments.
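The two-network shape can be sketched in compose like this; the network names and the prometheus service are illustrative:

```yaml
services:
  openclaw:
    networks: [agent, monitoring]  # reachable by its dependencies and by monitoring
  ollama:
    networks: [agent]              # agent network only, invisible to monitoring
  prometheus:
    networks: [monitoring]         # can scrape openclaw, cannot reach ollama

networks:
  agent:
  monitoring:
```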
Running multiple agents on one host
The compose pattern scales surprisingly well to a small handful of agents on one VPS. Each agent gets its own service block, its own workspace volume, its own port. A four-agent stack on a $10 a month VPS is comfortably within reach.
The shape of a multi-agent compose, abbreviated:
services:
  founder-assistant:
    image: openclaw/openclaw:2026.5.1
    env_file: .env.founder
    volumes: ["./founder-workspace:/workspace"]
    ports: ["127.0.0.1:3001:3000"]
    mem_limit: 1g
  ops-monitor:
    image: openclaw/openclaw:2026.5.1
    env_file: .env.ops
    volumes: ["./ops-workspace:/workspace"]
    ports: ["127.0.0.1:3002:3000"]
    mem_limit: 1g
  research-helper:
    image: openclaw/openclaw:2026.5.1
    env_file: .env.research
    volumes: ["./research-workspace:/workspace"]
    ports: ["127.0.0.1:3003:3000"]
    mem_limit: 1g
Each agent has separate workspace files, separate API keys, separate channels. The full multi-agent architecture (gateway, routing, agent-to-agent comms) lives in openclaw multi-agent on day 15, but for the simple "I want three independent agents on one host" case, the compose pattern alone is enough.
How this connects to your full agent
Docker is the foundation for the production deploy in openclaw vps deployment on day 13. The compose file from this lesson is what runs on the VPS, with systemd as the supervisor and a small monitoring stack on top. Day 14 (openclaw agentic coding) leans on Docker too: the executor agent for code changes runs inside its own sandbox to keep it from touching your real shell.
If you only do one thing from this lesson today, switch your :latest tag to a pinned version. Future you will be glad when a breaking change in 2026.6 does not silently land in your prod deployment overnight.
Key takeaways
- Docker isolates the agent from your host filesystem, a real safety win for autonomous deployments.
- Sandbox modes restrict what the container can read, write, or call out to.
- Compose is enough for single-host setups; you don't need Kubernetes for a personal agent.
- Always pin your Node and OpenClaw versions in the Dockerfile; autoupgrade is a footgun.
About the instructor. Adhiraj Hangal teaches this lesson. Founder of OpenClaw Consult and one of the few consultants whose code is merged in openclaw/openclaw core. PR #76345 was reviewed and merged by project creator Peter Steinberger. Read the contribution log.
Need help shipping openclaw docker in production?
OpenClaw Consult ships production-grade OpenClaw deployments for operators and founders. Founded by Adhiraj Hangal, a merged contributor to openclaw/openclaw core.