Introduction
ClawHub offers thousands of community Skills, but eventually you'll encounter a workflow that no existing Skill handles exactly the way you need. Maybe it's a proprietary internal API. Maybe it's a niche service the community hasn't targeted yet. Maybe you want a custom automation that combines several actions in a specific sequence. Whatever the reason, building your own Skill is the natural next step for anyone who wants OpenClaw to do something specific.
The good news: Skill development is genuinely accessible. A basic Skill is 30–50 lines of JavaScript. You don't need deep programming expertise — if you can read JavaScript and understand HTTP requests, you can build a working Skill in an afternoon. This guide walks through every step, from your first file to a published ClawHub listing.
Skills are the primary extension mechanism for OpenClaw. They expose "tools" — functions the LLM can call by name with structured parameters. When you ask your agent to check the weather, search the web, or run a shell command, it's invoking a Skill. Building your own Skill means teaching your agent to do something new. The architecture is straightforward: define what the tool does, implement the handler, test it, and deploy. The rest of this guide covers each step in depth, with real examples and the patterns that separate good Skills from great ones.
Anatomy of a Skill
A Skill is a Node.js module that exports a specific object structure. At minimum, every Skill needs: a name, a description, and an array of tools. Here's the minimal structure:
// skill.js
module.exports = {
  name: "my-skill",
  version: "1.0.0",
  description: "A brief description of what this skill does",
  author: "Your Name",
  tools: [
    // Tool definitions go here
  ],
  // Optional: initialization function called when skill is loaded
  init: async (config) => {
    // Setup code: validate config, establish connections, etc.
  }
};
The name field must be unique within an OpenClaw installation. It's used as the identifier when installing or referencing the Skill. Use kebab-case: my-weather-skill, not myWeatherSkill. The description is what appears in ClawHub listings and in the agent's understanding of its capabilities. Write it clearly — the agent reads it to understand when this Skill might be useful. A vague description like "does stuff with APIs" leads to the agent either overusing or underusing your tool.
The optional init function runs once when OpenClaw loads your Skill. Use it for: validating required config, establishing database connections, or warming caches. If init throws, the Skill fails to load — useful for failing fast when configuration is wrong. Don't do heavy work in init; keep startup fast.
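As a sketch of that fail-fast pattern (the key name api_key is an assumption carried over from this guide's examples, not a required OpenClaw convention):

```javascript
// Minimal init sketch: check required config when the Skill loads so
// misconfiguration fails at startup instead of mid-conversation.
// "api_key" is an assumed key name matching this guide's examples.
const REQUIRED_KEYS = ["api_key"];

const init = async (config) => {
  const missing = REQUIRED_KEYS.filter((key) => !config || !config[key]);
  if (missing.length > 0) {
    // Throwing here makes OpenClaw refuse to load the Skill: fail fast.
    throw new Error(`Missing required config: ${missing.join(", ")}`);
  }
  // Keep startup cheap: no network calls or heavy work here.
};
```

Throwing from init is the one place where an exception is the right tool, because a Skill that cannot work should never finish loading.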
Skills also support a skill.json manifest file for metadata that doesn't belong in the JavaScript: ClawHub listing information, required configuration parameters, compatibility requirements, and a longer description for the marketplace. Create this alongside your skill.js:
{
  "name": "my-skill",
  "displayName": "My Custom Skill",
  "description": "Detailed description for the ClawHub listing",
  "version": "1.0.0",
  "author": "Your Name",
  "license": "MIT",
  "requiredConfig": [
    {
      "key": "api_key",
      "description": "Your API key for the service",
      "required": true,
      "sensitive": true
    }
  ],
  "tags": ["productivity", "api"],
  "minOpenClawVersion": "1.5.0"
}
The requiredConfig array tells OpenClaw what configuration the user must provide. Mark sensitive fields with "sensitive": true — they won't appear in logs. The minOpenClawVersion ensures your Skill doesn't load on incompatible versions, preventing cryptic runtime errors.
Defining Tools
Each tool in the tools array represents a specific action the AI can invoke. The tool definition is what the LLM reads to understand what the tool does and how to call it. Getting this right is the most important part of Skill development.
A tool definition has three key parts: a name, a description, and a parameters object (following JSON Schema format). Here's an example for a Skill that fetches weather data:
tools: [
  {
    name: "get_current_weather",
    description: `Fetches current weather conditions for a specified city or location.
Returns temperature, weather description, humidity, and wind speed.
Use this when the user asks about current weather anywhere in the world.
Do NOT use this for forecasts — only for current conditions.`,
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "City name and optionally country code, e.g. 'London, GB' or 'Tokyo'"
        },
        units: {
          type: "string",
          enum: ["metric", "imperial"],
          description: "Temperature units. 'metric' for Celsius, 'imperial' for Fahrenheit",
          default: "metric"
        }
      },
      required: ["location"]
    },
    handler: async ({ location, units = "metric" }, context) => {
      // Handler implementation here
    }
  }
]
The description field deserves special attention. Notice it's multi-line and includes: what the tool does, what it returns, when to use it, and explicitly when not to use it. This precision guides the LLM to invoke the tool at the right moments. Vague descriptions lead to the tool being called in inappropriate contexts or missed when it's most relevant. Add "Use this when..." and "Do NOT use this for..." clauses — they significantly improve tool selection accuracy.
The parameters object follows the JSON Schema specification. Define all input parameters precisely — their types, acceptable values (for enums), descriptions, and which are required. The LLM uses this schema to generate well-formed tool calls. Missing or imprecise parameter definitions are a common source of tool call failures. For optional parameters, provide sensible defaults. For enums, list every valid value — the LLM will pick from the list.
Tool names should be verb-noun: get_weather, create_task, search_documents. Avoid generic names like do_thing. The name appears in the agent's reasoning; make it descriptive.
Writing the Handler Function
The handler is the function that runs when the agent invokes your tool. It receives the parameter values the LLM generated and must return a result that the LLM can interpret and communicate to the user.
handler: async ({ location, units = "metric" }, context) => {
  const apiKey = context.config.api_key;
  if (!apiKey) {
    // Return (don't throw) so the error reaches the LLM as readable text
    return "Weather API key not configured. Please set api_key in the skill config.";
  }
  try {
    const response = await fetch(
      `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(location)}&units=${units}&appid=${apiKey}`
    );
    if (!response.ok) {
      if (response.status === 404) {
        return `Location "${location}" not found. Please check the city name and try again.`;
      }
      throw new Error(`API error: ${response.status}`);
    }
    const data = await response.json();
    const tempUnit = units === "metric" ? "°C" : "°F";
    return `Current weather in ${data.name}, ${data.sys.country}:
- Conditions: ${data.weather[0].description}
- Temperature: ${data.main.temp}${tempUnit} (feels like ${data.main.feels_like}${tempUnit})
- Humidity: ${data.main.humidity}%
- Wind: ${data.wind.speed} m/s`;
  } catch (error) {
    return `Failed to fetch weather for ${location}: ${error.message}`;
  }
}
Key handler implementation principles:
- Always return a string. The LLM expects text it can incorporate into its response. Return structured, human-readable text rather than raw JSON objects. Use bullet points, line breaks, and clear formatting for complex data.
- Handle errors gracefully. Don't let exceptions propagate to the agent runtime. Catch errors and return a descriptive error message — the LLM can then tell the user what went wrong and suggest alternatives. "Location not found" is better than "404".
- Keep handlers focused. Each handler should do one thing well. If you find yourself building complex logic inside a handler, consider whether it should be multiple separate tools. Split complex operations into smaller, composable tools.
- Access configuration through context. Never hardcode credentials. Use context.config.your_key_name to access values the user configured for your Skill. Validate config at the start of the handler; fail fast with a clear message.
- Sanitize inputs. If your handler passes user input to external systems (APIs, shell commands), sanitize it. Use encodeURIComponent for URLs. Validate formats. Never trust LLM-generated parameters blindly.
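A small sketch of that validation step, using a hypothetical validateLocation helper for the weather example above:

```javascript
// Input-validation sketch: never trust LLM-generated parameters.
// validateLocation is a hypothetical helper, not part of the OpenClaw API.
function validateLocation(location) {
  if (typeof location !== "string" || location.trim().length === 0) {
    throw new Error("location must be a non-empty string");
  }
  if (location.length > 100) {
    throw new Error("location is suspiciously long; refusing to query");
  }
  // Encode so the value is safe to embed in a URL query string.
  return encodeURIComponent(location.trim());
}
```

Inside a handler, wrap the call in try/catch and return the error message as text, consistent with the error-handling principle above.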
For long-running operations, consider timeouts. If your API can hang, wrap the fetch in a timeout. Return a helpful message: "The request timed out. The service may be slow. Try again in a few minutes."
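One way to sketch that timeout, assuming a generic withTimeout helper (not an OpenClaw built-in) that races the request against a timer:

```javascript
// Timeout sketch: race any promise against a timer so a hung API call
// surfaces as a readable message instead of stalling the agent.
// withTimeout is a hypothetical helper, not part of the OpenClaw API.
function withTimeout(promise, timeoutMs, label) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => {
      reject(new Error(`${label} timed out after ${timeoutMs / 1000}s. The service may be slow; try again in a few minutes.`));
    }, timeoutMs);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage inside a handler (sketch):
// const response = await withTimeout(fetch(url), 10000, "Weather API request");
```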
Testing Your Skill Locally
Before adding your Skill to a running OpenClaw instance, test it in isolation. Create a simple test file:
// test-skill.js
const skill = require('./skill.js');

// Simulate the context object
const mockContext = {
  config: {
    api_key: process.env.TEST_API_KEY
  }
};

// Test your tool handler directly
async function runTest() {
  const weatherTool = skill.tools.find(t => t.name === 'get_current_weather');

  console.log("Testing with London...");
  const result = await weatherTool.handler({ location: "London, GB" }, mockContext);
  console.log("Result:", result);

  console.log("Testing with invalid location...");
  const errorResult = await weatherTool.handler({ location: "INVALIDCITY123" }, mockContext);
  console.log("Error result:", errorResult);
}

runTest().catch(console.error);
Run with: TEST_API_KEY=your-key node test-skill.js
Test both happy paths and error conditions. Verify the return format is readable and informative. Check that error cases return helpful messages rather than throwing exceptions. Test with edge cases: empty strings, very long inputs, special characters. Does your handler handle them gracefully?
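One way to automate that edge-case sweep in the test file, assuming a hypothetical sweepEdgeCases helper and an illustrative list of inputs:

```javascript
// Edge-case sweep sketch: feed awkward inputs to a handler and collect
// any that make it throw or return a non-string result.
// sweepEdgeCases and the case list are illustrative, not an OpenClaw API.
const edgeCases = ["", "   ", "x".repeat(500), "München", "'; DROP TABLE--"];

async function sweepEdgeCases(handler, context) {
  const failures = [];
  for (const location of edgeCases) {
    try {
      const result = await handler({ location }, context);
      if (typeof result !== "string") failures.push(location);
    } catch (error) {
      // Handlers should return error text, never throw.
      failures.push(location);
    }
  }
  return failures;
}
```

An empty failures array means every awkward input came back as readable text, which is exactly what the agent runtime expects.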
To test in an actual OpenClaw instance, place your Skill directory inside the ./skills directory of your OpenClaw installation and restart. The agent will automatically discover and load it. Interact via Telegram and test invocation through natural language. Try: "What's the weather in Tokyo?" and "What's the weather in asdfghjkl?" — verify both work.
For Skills that require multiple tools working together, test the full flow. Example: a CRM Skill with create_contact and add_note. Create a contact, then add a note. Does the agent chain them correctly? Does the output make sense?
Security Checklist
Before sharing or publishing a Skill, review this security checklist. These are the patterns security researchers look for when auditing Skills:
- No hardcoded credentials. All API keys, passwords, and sensitive values must come from context.config. Never commit secrets to git.
- No reading files outside intended scope. Your Skill should only access files relevant to its function. Never read from ~/.ssh, ~/.aws, the OpenClaw config directory, or any other sensitive system location.
- No outbound connections to unexpected domains. Your Skill should only communicate with the API or service it's designed to integrate with. Unexpected outbound connections are the primary signal of malicious behavior.
- Sanitize inputs. If your Skill executes any commands using user-provided input, sanitize that input to prevent command injection. Never pass unsanitized user input to exec() or similar.
- No access to environment variables containing other services' credentials. Your Skill only needs access to the config values explicitly defined in your skill.json.
- Log only what's necessary. Don't log sensitive values from responses or user data. API keys, tokens, and PII should never appear in logs.
- Validate rate limits. If your Skill calls external APIs, respect rate limits. Don't hammer services. Implement backoff for 429 responses.
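A backoff sketch for that last point, assuming hypothetical helper names (tune the base delay and retry count for your API):

```javascript
// Exponential backoff sketch for 429 (Too Many Requests) responses.
// backoffDelay and fetchWithBackoff are hypothetical helper names.
function backoffDelay(attempt, baseMs = 500, capMs = 8000) {
  return Math.min(capMs, baseMs * 2 ** attempt); // 500, 1000, 2000, ...
}

async function fetchWithBackoff(doFetch, maxRetries = 3, baseMs = 500) {
  for (let attempt = 0; ; attempt++) {
    const response = await doFetch();
    if (response.status !== 429 || attempt >= maxRetries) {
      return response; // success, a non-rate-limit error, or out of retries
    }
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt, baseMs)));
  }
}
```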
When in doubt, apply the principle of least privilege. Your Skill should have access only to what it needs. If it doesn't need to write files, don't give it write access. If it only needs one API endpoint, don't request broader credentials.
Publishing to ClawHub
Once your Skill is tested, documented, and passes the security checklist, you can publish it to ClawHub for the community to discover and install.
- Create a public GitHub repository for your Skill with the directory structure: skill.js, skill.json, README.md (with usage instructions), and optionally a CHANGELOG.md.
- Ensure your repository has a clear LICENSE file. MIT License is the community standard for OpenClaw Skills.
- Visit the ClawHub developer portal and submit your GitHub repository URL.
- The Foundation's review process takes 1–5 days. Reviewers check for security issues, appropriate documentation, and functional correctness.
- Once approved, your Skill appears in the catalog, installable via openclaw skill install your-skill-name.
Write comprehensive documentation. A Skill with a clear README, example configuration, and usage examples gets installed significantly more often than one with minimal documentation. Include: what the Skill does, what APIs or services it connects to, how to obtain required credentials, example conversations showing the agent using the Skill, and any limitations or known issues.
Add screenshots or example outputs. "Here's what the agent returns when you ask for weather in London." Show the config format. "Add this to your config.yaml." Document the requiredConfig keys. Users who can get your Skill running in 5 minutes will recommend it; users who struggle will move on.
Common Pitfalls
Tool never gets called. Usually a description problem. The LLM doesn't understand when to use your tool. Add explicit "Use this when..." and "Do NOT use for..." clauses. Make the description specific.
Tool gets called with wrong parameters. Check your parameter schema. Are types correct? Are enums complete? Is the description clear? The LLM infers parameter values from context; vague descriptions produce wrong values.
Handler throws. Catch all errors. Return error messages as strings. Never let exceptions bubble to the runtime — they crash the agent's reasoning loop.
Returning JSON instead of text. The LLM expects text. If you return a raw object, the agent may not interpret it correctly. Format: "Here are the results: [list]. Summary: [text]."
Config not found. Ensure requiredConfig in skill.json matches what you access in the handler. Use context.config.your_key — the key must match. Validate at handler start.
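A small guard sketch for that check, assuming a hypothetical getRequiredConfig helper that returns a result object instead of throwing:

```javascript
// Config-guard sketch: confirm a key from skill.json's requiredConfig is
// actually present before the handler does any work.
// getRequiredConfig is a hypothetical helper, not an OpenClaw built-in.
function getRequiredConfig(context, key) {
  const value = context && context.config ? context.config[key] : undefined;
  if (value === undefined || value === null || value === "") {
    return { ok: false, message: `${key} not configured. Please set ${key} in the skill config.` };
  }
  return { ok: true, value };
}

// Usage at the top of a handler (sketch):
// const apiKey = getRequiredConfig(context, "api_key");
// if (!apiKey.ok) return apiKey.message;
```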
Wrapping Up
Building a custom OpenClaw Skill is one of the most direct ways to extend your agent's capabilities precisely to your needs. The Skills API is well-designed, the development cycle is fast, and the community will benefit from well-built, well-documented contributions. Every Skill you build for yourself is a potential contribution to the ecosystem — and the ecosystem is what makes OpenClaw one of the most capable AI agent frameworks available today.
Start with a simple Skill: one tool, one API call. Get it working. Then add complexity. The patterns in this guide scale: multi-tool Skills, Skills with state, Skills that integrate with databases. The fundamentals stay the same. See API integration for more advanced patterns.