Install & First Run, Day 2 of the Free Comprehensive OpenClaw Course
Zero to Running Agent in 10 Minutes
Why this matters
Installing OpenClaw is the moment most people drop off. The official docs assume you already speak Node.js. This lesson walks the install end to end on macOS, Linux, and Windows, with the exact commands, the exact .env keys, and the gotchas that only show up the first time you run the agent. By the end you will have an agent talking back to you in a real channel.
How do I install OpenClaw on macOS, Linux, or Windows?
The install is where most people drop off, so this lesson walks it end to end on every supported OS. The runtime is a Node.js application, which means the prerequisite is Node 20 or newer. Check yours with node --version. If you do not have Node, install it from nodejs.org first.
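That version check can be wrapped in a small shell guard before you run anything else. This is a sketch; node_major is a helper name invented here, not an OpenClaw command:

```shell
# node_major pulls the major version out of `node --version` output,
# e.g. "v20.11.1" -> 20. The helper name is ours, not part of OpenClaw.
node_major() {
  local v="${1#v}"   # drop the leading "v"
  echo "${v%%.*}"    # keep everything before the first dot
}

if command -v node >/dev/null 2>&1; then
  if [ "$(node_major "$(node --version)")" -ge 20 ]; then
    echo "Node is new enough for OpenClaw"
  else
    echo "Node is too old — install Node 20 or newer first"
  fi
else
  echo "Node.js is not installed — get it from nodejs.org"
fi
```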
On macOS and Linux the install is one line:
npx -y openclaw init my-agent
That creates a folder called my-agent with the workspace files (AGENTS.md, HEARTBEAT.md, SOUL.md, MEMORY.md) and a starter .env. cd into the folder.
On Windows the same command works inside PowerShell or Windows Terminal. WSL2 also works, and it is honestly the easiest path on Windows: you get a real Linux environment without leaving Windows. If you can run WSL2, do.
If npx fails for you, install OpenClaw globally instead with npm install -g openclaw and then run openclaw init my-agent.
What API keys do I need before the first run?
OpenClaw needs one API key to start: a model provider key. The simplest first run uses Anthropic Claude. Get an API key from console.anthropic.com, fund the account with $5 (which goes a very long way for a personal agent), and paste the key into the .env file in your agent folder:
ANTHROPIC_API_KEY=sk-ant-...
If you would rather start with OpenAI, the same pattern works with OPENAI_API_KEY=... instead. Or for a fully local agent against Llama 3, install Ollama from ollama.com, pull a model with ollama pull llama3, and set OLLAMA_BASE_URL=http://localhost:11434 in the .env. The full provider walkthrough is in day 3, openclaw with ollama and other providers.
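If you go the Ollama route, the resulting .env might look like the sketch below. It uses only keys named in this lesson; day 3's provider walkthrough covers anything beyond them.

```shell
# Sketch of a .env for a fully local agent (only keys named in this lesson).
OLLAMA_BASE_URL=http://localhost:11434
OPENCLAW_PORT=3000
OPENCLAW_LOG_LEVEL=info
```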
Channel keys come second. The fastest channel to wire is Telegram, three minutes from BotFather to first reply. Day 5 walks the full openclaw channels matrix.
Why did my first openclaw run fail, and how do I fix it?
The three most common first-run failures, in the order they bite people:
Wrong Node version. If you see syntax errors or "unexpected token" messages on startup, you are on Node 18 or older. The fix is to upgrade Node: on macOS with brew upgrade node, on Linux per your distro's instructions, on Windows by reinstalling from nodejs.org or via nvm-windows. The cleanest path on every OS is to install nvm and run nvm install 20, which gives you a per-shell Node version manager you can switch as the runtime evolves.
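One wrinkle when checking for nvm: it is a shell function sourced from nvm.sh, not a binary on PATH, so command -v nvm misses it in fresh shells. A file test works instead (nvm_status is our helper name):

```shell
# nvm lives at $NVM_DIR/nvm.sh (default ~/.nvm) and is loaded as a shell
# function, so check the install location rather than PATH.
nvm_status() {
  if [ -s "${NVM_DIR:-$HOME/.nvm}/nvm.sh" ]; then
    echo "installed"
  else
    echo "missing"
  fi
}

case "$(nvm_status)" in
  installed) echo "nvm found — run: nvm install 20 && nvm use 20" ;;
  missing)   echo "nvm not found — install it from github.com/nvm-sh/nvm first" ;;
esac
```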
.env file in the wrong place. The agent loads .env from its own folder, not from your home directory. If the API key seems to be there but the agent claims it is not, you are probably running openclaw run from the wrong directory. cd into the agent folder first. Verify with pwd on Linux, macOS, or PowerShell, or with cd (no arguments) in cmd.
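That pre-flight check is easy to script. A sketch (check_env_here is our helper name, not an OpenClaw command):

```shell
# Guard before `openclaw run`: confirm the agent's .env actually sits in
# the directory you are about to run from.
check_env_here() {
  if [ -f "$1/.env" ]; then
    echo "ok: .env found in $1"
  else
    echo "missing: no .env in $1 — cd into your agent folder first"
  fi
}

check_env_here "$(pwd)"
```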
Key has no funds. Anthropic and OpenAI both create unfunded accounts by default. The first message will return a 402 or "insufficient_quota" error. Add $5 to the account and the agent runs. This is also the most common production gotcha to watch for: an unfunded fallback provider can silently break a multi-provider deployment.
If you hit any of these, the runtime prints a usable error message in the terminal. Read it. The error almost always names the file or the env key the agent could not find.
Port conflicts, permissions, and other first-run gotchas
Beyond the big three, four more failures show up regularly. Port 3000 already in use. The runtime defaults to port 3000 for the WebChat surface. If another Node app is on 3000 (Next.js dev servers love this port), the runtime fails to bind. Fix it with OPENCLAW_PORT=3100 in your .env, or stop whatever is squatting on 3000: lsof -i :3000 finds the PID, kill frees the port.
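Before reaching for lsof, bash can probe the port itself through its /dev/tcp pseudo-device. A sketch; port_in_use is our helper, and the trick is bash-specific (plain sh lacks /dev/tcp):

```shell
# Returns success if something is already listening on the port. Bash treats
# /dev/tcp/<host>/<port> as a TCP connection when used as a redirection target.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

port="${OPENCLAW_PORT:-3000}"
if port_in_use "$port"; then
  echo "port $port is busy — set OPENCLAW_PORT to a free port in .env"
else
  echo "port $port is free"
fi
```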
File permission errors on Linux. If you installed with sudo npm install -g, the global node_modules folder is owned by root and the runtime cannot write its cache files. Reinstall without sudo, or fix the existing install with sudo chown -R $(whoami) ~/.npm. The better long-term answer is nvm, which never touches root.
SSL or certificate errors during npx. Usually a corporate firewall doing TLS interception. The fix is to set NODE_EXTRA_CA_CERTS to your corp CA bundle path, or to run the install from a network that does not intercept (a personal hotspot works in a pinch). Never set NODE_TLS_REJECT_UNAUTHORIZED=0; that is a real security hole.
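The safe version of that fix looks like this; the bundle path below is a placeholder for wherever your company publishes its root CA:

```shell
# Point Node's TLS stack at the corporate root CA instead of disabling
# certificate checks. The path is a placeholder — substitute your real bundle.
export NODE_EXTRA_CA_CERTS="$HOME/certs/corp-ca.pem"

# Rerun the install from this same shell so npx inherits the variable:
#   npx -y openclaw init my-agent
echo "NODE_EXTRA_CA_CERTS=$NODE_EXTRA_CA_CERTS"
```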
Antivirus blocking the install on Windows. Windows Defender and some third-party AV products quarantine random binaries inside node_modules. The symptom is a partial install that errors mid-step. Add an exclusion for your agent folder before reinstalling. This is more common than people admit.
WSL2 and Raspberry Pi notes
Two extra environments deserve specific guidance. On WSL2, install Node inside the WSL2 Linux side, not on the Windows side. Mixing the two creates path issues that show up as cryptic "module not found" errors hours later. Open a WSL2 shell, run sudo apt update, install Node 20 or newer (the default apt package is often older, so nvm or NodeSource is the safer route), and do the entire install from inside WSL2. The agent's WebChat surface is reachable from your Windows browser at localhost:3000 the same way it would be on native Linux.
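If you are ever unsure which side of the fence a shell is on, WSL kernels advertise themselves in /proc/version. A sketch (is_wsl_kernel is our name):

```shell
# WSL2 kernel strings look like "Linux version 5.15.x-microsoft-standard-WSL2",
# so matching on "microsoft" distinguishes WSL from native Linux.
is_wsl_kernel() {
  case "$1" in
    *[Mm]icrosoft*) return 0 ;;
    *)              return 1 ;;
  esac
}

if is_wsl_kernel "$(cat /proc/version 2>/dev/null)"; then
  echo "inside WSL — install Node and OpenClaw on this Linux side"
else
  echo "native environment"
fi
```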
On Raspberry Pi, OpenClaw runs comfortably on a Pi 4 with 4 GB of RAM or better. The Pi 5 is a noticeable upgrade, and some users run a full personal agent on a Pi 5 sitting on their desk. The catch on Pi: do not try to run Ollama on the same board; the model is too big for the RAM. Wire to a cloud model and use the Pi as the agent host only. The whole setup pulls maybe 5 watts at idle, which is wonderful for a 24/7 deployment.
Sample .env shape for a starter agent:
ANTHROPIC_API_KEY=sk-ant-...
OPENCLAW_PORT=3000
OPENCLAW_LOG_LEVEL=info
OPENCLAW_WORKSPACE=./workspace
Sending the first message
Once the agent starts, it prints a URL for the WebChat surface, usually http://localhost:3000. Open it. Type "hi". The agent should reply, in whatever default voice the starter SOUL.md gives it, within a few seconds.
Seeing that first reply is the moment you know the install is good. It will sound generic and corporate. That is not a bug; that is the model's default voice with a starter SOUL.md that has not been opinionated yet. The openclaw soul.md lesson on day 7 fixes this. For now, the goal is simply to confirm the model is reachable.
From there, day 5 walks the channel setup so the agent can reach you on Telegram, WhatsApp, Discord, iMessage, Slack. By tomorrow you will be talking to your agent from your phone, not from a terminal.
Verifying the install is healthy
Once the agent is replying, run a quick health pass. openclaw status shows the runtime version, the loaded workspace files, and the current provider. The output is the cheapest way to confirm the agent is reading the right files. If SOUL.md is missing from the list, you are running from the wrong directory or the workspace path is wrong.
Next, send a test that exercises the model: "Give me a one-sentence summary of why the sky is blue." If the reply is generic and correct, the model wiring is good. If the reply is "I cannot help with that" or the request hangs, the API key is invalid or the provider is down. Check the runtime log for the actual error string.
Finally, if you wired Telegram or another channel in the .env, send a message there too. Channels load after the model, so a working WebChat reply but a silent Telegram usually means a bad bot token or a misconfigured webhook. The runtime log names the channel and the failure if there is one.
Do this verification pass every time you change the .env or move the agent to a new machine. Three minutes of checks beats an hour of "why is the agent not replying" debugging two days later when you forgot what you changed.
How this connects to your full agent
This is the foundation lesson. Every later lesson assumes a working install. The next three days are the model and cost layer. Openclaw with ollama on day 3 walks every provider you can wire to. Openclaw cost optimization on day 4 covers how to keep the bill predictable. Openclaw channels on day 5 is where the agent actually shows up in your life beyond the WebChat surface.
If the install fails in a way this lesson does not cover, the runtime's GitHub issues at github.com/openclaw/openclaw/issues is the right next stop. Search the error string before opening a new one. Almost every install error has been seen and answered by someone before.
The compounding payoff of a clean install is hard to overstate. Every later day in the course adds a small piece: a memory file, a security gate, a channel, a Skill. They all assume the foundation is solid. A broken install you patched with workarounds will bite you in week three, when the workspace contract relies on the runtime behaving exactly as documented. Fix the install properly today, and the rest of the course gets easier.
Key takeaways
- OpenClaw needs Node.js 20 or newer and a single .env file with one API key to start.
- The one-line npx installer works on macOS, Linux, and Windows; WSL2 is the smoothest Windows path, and a global npm install is the fallback if npx fails.
- Telegram is the fastest channel to wire up first, three minutes from BotFather to first reply.
- Most install errors trace back to the Node version or to a .env file in the wrong directory.
About the instructor. Adhiraj Hangal, founder of OpenClaw Consult and one of the few consultants with code merged in openclaw/openclaw core, teaches this lesson. PR #76345 was reviewed and merged by project creator Peter Steinberger. Read the contribution log.
Need help shipping OpenClaw in production?
OpenClaw Consult ships production-grade OpenClaw deployments for operators and founders. Founded by Adhiraj Hangal, a merged contributor to openclaw/openclaw core.
Hire an OpenClaw expert→