Introduction
A full AI agent running continuously on an $80 computer. It sounds implausible until you remember that OpenClaw is fundamentally a Node.js gateway service — lightweight software that coordinates between your messaging apps and cloud AI providers. The heavy lifting (AI inference) happens on remote servers. The Raspberry Pi just needs to run the coordination layer reliably, which it does excellently.
A Raspberry Pi 5 OpenClaw setup offers 24/7 autonomous AI agent capability at a fraction of the cost of any other deployment option. The power consumption is negligible (3–5 watts), the noise is zero, and the recurring cost is just your cloud API charges. What follows is the complete setup.
Which Raspberry Pi to Use
Not all Raspberry Pi models are equally suitable for OpenClaw. Here's the practical guide:
Raspberry Pi 5 (8GB) — Recommended: The current flagship. The Pi 5 brings a significant performance leap over Pi 4 — roughly 2–3x faster CPU performance and much better I/O. The 8GB RAM model is important for OpenClaw: the Node.js service, its dependencies, and Skills can consume 300–500MB under load, leaving comfortable headroom in 8GB but feeling cramped in 4GB especially if you run multiple concurrent Skills.
Raspberry Pi 4 (8GB) — Good alternative: The Pi 4 8GB works well for OpenClaw with cloud models. Slower than Pi 5 but perfectly capable. Available at lower prices as Pi 5 stock increases. One caveat: avoid the 4GB Pi 4 — under heavy Skills usage, it can run low on memory and cause instability.
Raspberry Pi Zero 2 W — Not recommended: The Zero 2 W is charming but too slow and too RAM-limited (512MB) for a reliable OpenClaw deployment. Fine as a learning exercise, problematic for production use.
Additional hardware you'll need: a quality microSD card (at minimum 32GB, A2 application performance class) or preferably a USB 3 SSD, an official Raspberry Pi power supply (the 5V/5A supply for Pi 5), and a case with cooling if running continuously.
OS & Node.js Setup
Ubuntu Server 22.04 LTS is the recommended OS for OpenClaw on Raspberry Pi. Raspberry Pi OS works but has more moving parts. Ubuntu Server provides a lean, stable base with excellent Node.js support:
# Flash Ubuntu Server 22.04 LTS (arm64) to your microSD using Raspberry Pi Imager
# Boot, then SSH in
# Update the system
sudo apt update && sudo apt upgrade -y
# Install Node.js 20 via NodeSource repository
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
# Verify
node --version # v20.x.x
npm --version # 10.x.x
# Install git
sudo apt install -y git
# Optional but recommended: install tmux for terminal session management
sudo apt install -y tmux
One important Pi-specific optimization: if you're running from an SD card, move the swap file off it to reduce wear (better still, run from an SSD). Under continuous use, a swap file on the SD card will wear the card out faster:
# Ubuntu Server doesn't ship dphys-swapfile (a Raspberry Pi OS tool),
# so create the swap file manually, ideally on the SSD
sudo fallocate -l 2G /path/to/ssd/swapfile
sudo chmod 600 /path/to/ssd/swapfile
sudo mkswap /path/to/ssd/swapfile
sudo swapon /path/to/ssd/swapfile
# Make it permanent across reboots
echo '/path/to/ssd/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Installing OpenClaw on Pi
The installation process on Pi is identical to any Linux system:
# Create a directory for OpenClaw
mkdir ~/agents && cd ~/agents
# Clone the repository
git clone https://github.com/openclaw-foundation/openclaw.git
cd openclaw
# Install dependencies
npm install
# Create your config from the template
cp config.example.yaml config.yaml
# Edit your config
nano config.yaml
Configure your LLM provider (you'll be using a cloud provider — see next section) and your messaging channel. Telegram is recommended for Pi deployments due to its simple webhook setup.
Test that everything works:
npm start
If OpenClaw starts and your Telegram bot responds, the base installation is complete.
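If the bot stays silent, it's worth ruling out the token itself before digging into config. Telegram's getMe endpoint identifies the bot behind any valid token (a quick sketch; TELEGRAM_BOT_TOKEN is an assumed variable name — substitute wherever your config keeps the token):

```shell
# Build a Telegram Bot API URL and ask who this token belongs to
tg_url() { printf 'https://api.telegram.org/bot%s/%s' "$1" "$2"; }
# A good token returns {"ok":true,...}; a bad one {"ok":false,"error_code":401,...}
curl -s "$(tg_url "${TELEGRAM_BOT_TOKEN:-unset}" getMe)"
```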
For automatic startup on boot, use systemd (Ubuntu's standard service manager):
# Create a systemd service file
sudo nano /etc/systemd/system/openclaw.service
[Unit]
Description=OpenClaw AI Agent
After=network.target
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/agents/openclaw
ExecStart=/usr/bin/node src/index.js
Restart=always
RestartSec=10
Environment=NODE_ENV=production
EnvironmentFile=/home/ubuntu/agents/openclaw/.env
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw
sudo systemctl status openclaw # Should show "active (running)"
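Since stdout now goes to the journal rather than a terminal, journalctl -u openclaw -f is the command to reach for when something looks off. For scripted checks, systemctl is-active prints a single word (a small sketch):

```shell
# One-word unit state: "active" on a healthy install
status=$(systemctl is-active openclaw 2>/dev/null) || true
: "${status:=unknown}"   # systemd unavailable -> "unknown"
echo "openclaw is: ${status}"
```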
Performance & Limitations
With cloud models, Pi 5 performance is excellent for OpenClaw's use cases. The Node.js gateway service is not CPU-intensive — it's primarily network I/O (sending requests to the AI API and receiving responses). API response latency dominates total response time, not Pi processing time. A typical interactive message takes 2–5 seconds, most of which is waiting for the cloud API.
Where the Pi feels the limitation: memory-intensive Skills that load large datasets or run complex processing locally. If you're analyzing large CSV files or running extensive data processing through shell commands, the Pi 4's slower CPU and the overhead of running many concurrent processes can make it sluggish.
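One mitigation worth knowing: systemd can cap the service's memory so a runaway Skill gets killed and restarted (Restart=always) instead of dragging the whole Pi into swap. A sketch using the standard MemoryMax directive; the 1G figure is an assumption to tune for your workload:

```shell
# Drop-in override capping OpenClaw's memory footprint
OVERRIDE='[Service]
MemoryMax=1G'
sudo mkdir -p /etc/systemd/system/openclaw.service.d
printf '%s\n' "$OVERRIDE" | sudo tee /etc/systemd/system/openclaw.service.d/memory.conf
sudo systemctl daemon-reload && sudo systemctl restart openclaw
```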
For standard use cases — heartbeat monitoring, conversational AI, email management, web browsing Skills, calendar management — the Pi 5 is a completely capable platform that you'll never feel waiting on.
Why You'll Use Cloud Models
The Raspberry Pi cannot run useful LLMs locally. Even the Pi 5 with 8GB RAM, running a tiny 1–2B parameter model, produces responses that are too slow (0.5–2 tokens per second) and too low-quality for practical agent tasks. This is not a criticism of the Pi — running 7B+ parameter models requires GPU acceleration or large amounts of fast memory, neither of which the Pi provides.
The practical implication: Pi OpenClaw deployments are always cloud-model deployments. Your API key is in the config, your conversations pass through OpenAI or Anthropic servers. This means:
- No true offline operation (requires internet for every inference)
- Ongoing API costs
- AI providers see your prompts
For users who need offline capability or data privacy guarantees, the Pi is not the right platform — a Mac Mini or a PC with a capable GPU running Ollama is the better choice. For users who just want an always-on, cheap AI agent running cloud models, the Pi is perfect.
Keeping It Running 24/7
A few Pi-specific reliability optimizations for continuous operation:
Use a UPS (Uninterruptible Power Supply): Power interruptions can corrupt the SD card filesystem. At the Pi's ~5W draw, even a basic UPS ($20–30) provides ample runtime to prevent this.
Enable automatic reboots for memory leaks: Node.js processes can accumulate memory over weeks. Configure a weekly reboot during off-hours using a systemd timer or cron: 0 4 * * 0 /sbin/reboot.
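Installed via cron, that schedule looks like this (a sketch; the 04:00 Sunday slot is just an off-hours example — pick your own):

```shell
# cron fields: minute hour day-of-month month day-of-week
CRON_LINE='0 4 * * 0 /sbin/reboot'
# Append to root's crontab (reboot needs root) without clobbering existing entries
( sudo crontab -l 2>/dev/null; echo "$CRON_LINE" ) | sudo crontab -
```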
Monitor the Pi remotely: Set up a heartbeat task that also monitors the Pi's temperature and disk usage. vcgencmd measure_temp reports the CPU temperature — alert if above 80°C. The Pi 5 throttles at high temperatures; a case with adequate cooling prevents this.
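A minimal version of that temperature check might look like the following (a sketch; check_temp.sh is a hypothetical name, and where vcgencmd isn't installed the script falls back to the kernel's thermal zone reading):

```shell
#!/bin/sh
# check_temp.sh (hypothetical): warn when the SoC runs hot
THRESHOLD=80
parse_temp() {                 # vcgencmd prints e.g. temp=54.3'C; extract the number
  t=${1#temp=}; printf '%s' "${t%\'C}"
}
if command -v vcgencmd >/dev/null 2>&1; then
  temp=$(parse_temp "$(vcgencmd measure_temp)")
else                           # /sys reports millidegrees Celsius
  temp=$(awk '{printf "%.1f", $1/1000}' /sys/class/thermal/thermal_zone0/temp 2>/dev/null || echo 0.0)
fi
# POSIX sh compares integers only, so delegate the float compare to awk
if awk -v t="$temp" -v th="$THRESHOLD" 'BEGIN{exit !(t >= th)}'; then
  echo "WARNING: CPU at ${temp}C (threshold ${THRESHOLD}C)"
fi
```

Wiring this into the same heartbeat task means an over-temperature warning reaches you in chat rather than sitting in a log.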
Set up remote SSH access: Use Tailscale or ZeroTier for secure remote access to your Pi from anywhere, enabling you to troubleshoot, update, or restart the service without physical access.
Wrapping Up
A Raspberry Pi 5 running OpenClaw with cloud models is an extraordinary value proposition: a capable, 24/7 AI agent for an $80–120 hardware investment with negligible electricity costs. It handles every standard OpenClaw use case reliably, it's silent, it's small, and it runs continuously without intervention. For users who want to explore OpenClaw's capabilities with minimal commitment, or who just need a permanently available AI assistant without the expense of a Mac Mini or VPS, the Pi is the ideal starting point.