Your agent has a voice. Now it needs a brain. Memory is what makes your agent get better over time — learning your preferences, accumulating context, and recalling the right information at the right moment.
Long-term memory. Durable facts, preferences, and decisions. Injected into context every turn in private/DM sessions. This is the file your agent always sees.
Daily notes. Running context and observations, one file per day. Not injected into context — accessed on demand via memory_search and memory_get tools.
Experimental, optional. Dream diary and dreaming sweep summaries for human review. Part of the automated promotion system that moves daily notes into long-term memory.
MEMORY.md costs tokens every turn because it’s injected into context. Daily notes cost nothing until the agent explicitly retrieves them via search. This is why keeping MEMORY.md lean matters — everything in it hits every interaction.
Injected into the system prompt on every turn, but gated to private/DM sessions only — not loaded in shared or group contexts. Today and yesterday’s daily notes are also read on session start as one-shot startup context.
Daily memory files are not part of the normal bootstrap context. They’re accessed on demand via memory_search and memory_get tools, so they don’t count against the context window unless the model explicitly reads them.
The docs make this explicit: “The model only ‘remembers’ what gets saved to disk — there is no hidden state.” If it’s not in a file, it’s not in memory.
An optional read-only sub-agent that runs before the main reply for eligible sessions. It uses memory_search and memory_get to retrieve relevant context, returning either summaries or NONE.
MEMORY.md is a single file on disk, shared across your agent. DM sessions share one session by default. Use session.identityLinks to link the same person across multiple channels so they share one session.
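The exact configuration shape depends on your OpenClaw config; as a hedged sketch, linking one person's identities might look something like this (the `session.identityLinks` key comes from the docs, but the mapping shape and channel-prefix strings here are illustrative assumptions):

```json
{
  "session": {
    "identityLinks": {
      "alice": ["discord:alice#1234", "telegram:alice_w"]
    }
  }
}
```

With a link like this in place, messages from either channel resolve to the same session, so they share the same memory context.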
Memory search combines vector similarity (semantic meaning) with BM25 keyword matching (exact terms like IDs and code symbols). Default weights:
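The default weights are set by the memory plugin. As a sketch of how the two signals combine, assuming both scores are normalized to [0, 1] and using purely illustrative 0.7/0.3 weights (not OpenClaw's actual defaults):

```python
def hybrid_score(vector_score: float, bm25_score: float,
                 vector_weight: float = 0.7, bm25_weight: float = 0.3) -> float:
    """Blend semantic similarity with keyword relevance.

    The 0.7/0.3 split is a placeholder for illustration only;
    the real defaults come from the memory plugin's configuration.
    """
    return vector_weight * vector_score + bm25_weight * bm25_score

# A chunk that matches semantically but shares no exact terms:
semantic_hit = hybrid_score(vector_score=0.9, bm25_score=0.0)
# A chunk containing an exact ID or code symbol with little semantic overlap:
keyword_hit = hybrid_score(vector_score=0.1, bm25_score=0.95)
```

The point of the blend: a pure vector search can miss exact identifiers, and pure BM25 can miss paraphrases; the weighted sum lets either signal surface a chunk.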
If you have an OpenAI, Gemini, Voyage, or Mistral key configured, memory search is enabled automatically. No manual setup needed.
SQLite-based. Works out of the box with keyword search, vector similarity, and hybrid search. No extra dependencies. Index stored at ~/.openclaw/memory/{agentId}.sqlite
memory_search — Hybrid search across all memory files. Returns relevant chunks.

memory_get — Retrieve a specific memory file or section by path.

Both tools come from the active memory plugin (default: memory-core).
Tell the agent “Remember that I prefer TypeScript” and it writes to the appropriate file using its standard file-writing capability. Memory writes are triggered by your requests, not by a dedicated memory-write tool.
Before compaction summarizes your conversation, OpenClaw runs a silent turn that reminds the agent to save important context to memory files. This prevents context loss during compaction.
Memory flush writes to daily files (memory/YYYY-MM-DD.md), not directly to MEMORY.md. Long-term promotion to MEMORY.md is handled by the dreaming system.
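The daily-file naming is just the date. A minimal sketch of how a flush target path could be derived, assuming a memory/ directory inside the workspace:

```python
from datetime import date
from pathlib import Path

def daily_note_path(workspace: Path, day: date) -> Path:
    # Daily notes live at memory/YYYY-MM-DD.md. Promotion into MEMORY.md
    # happens later, via the dreaming system, not at flush time.
    return workspace / "memory" / f"{day.isoformat()}.md"

p = daily_note_path(Path("~/.openclaw/workspace"), date(2025, 1, 15))
# → ~/.openclaw/workspace/memory/2025-01-15.md
```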
Dreaming is how daily notes get promoted into long-term memory. It’s a multi-phase scoring system that evaluates what’s worth keeping permanently.
Stages candidates from daily files. Never writes to MEMORY.md — only identifies what might be worth promoting.
Ranks candidates using weighted scoring and threshold gates. Requires minScore, minRecallCount, and minUniqueQueries to pass. Writes exclusively to MEMORY.md.
| Factor | Weight |
| --- | --- |
| Relevance | 0.30 |
| Frequency | 0.24 |
| Query diversity | 0.15 |
| Recency | 0.15 |
| Consolidation | 0.10 |
| Conceptual richness | 0.06 |
The flow: daily notes → light phase stages candidates → deep phase scores and gates → promoted entries land in MEMORY.md. DREAMS.md holds the sweep summaries for your review.
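The deep phase can be sketched as a weighted score followed by threshold gates. The factor weights below come from the table above; the gate names mirror minScore, minRecallCount, and minUniqueQueries, but the threshold values and candidate fields are illustrative assumptions:

```python
from dataclasses import dataclass

# Weights from the deep-phase scoring table (they sum to 1.0).
WEIGHTS = {
    "relevance": 0.30,
    "frequency": 0.24,
    "query_diversity": 0.15,
    "recency": 0.15,
    "consolidation": 0.10,
    "conceptual_richness": 0.06,
}

@dataclass
class Candidate:
    scores: dict          # factor name -> value in [0, 1]
    recall_count: int     # how often this note was retrieved
    unique_queries: int   # how many distinct queries retrieved it

def should_promote(c: Candidate, min_score: float = 0.6,
                   min_recall_count: int = 2,
                   min_unique_queries: int = 2) -> bool:
    # Weighted total, then every gate must pass.
    # Threshold values here are placeholders, not OpenClaw's defaults.
    total = sum(WEIGHTS[k] * c.scores.get(k, 0.0) for k in WEIGHTS)
    return (total >= min_score
            and c.recall_count >= min_recall_count
            and c.unique_queries >= min_unique_queries)
```

The gates explain why a note that scores well on one retrieval still may not promote: it also has to be recalled repeatedly, by distinct queries, before it earns a permanent slot.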
“Keep injected files concise — especially MEMORY.md, which can grow over time and lead to unexpectedly high context usage and more frequent compaction.”
MEMORY.md is injected every turn. Unlike daily files, which are on-demand and cost nothing until retrieved, every character in MEMORY.md costs tokens on every interaction. A bloated MEMORY.md triggers more frequent compaction, which means more memory flush cycles and more API calls.
Per-file cap: 20,000 chars (bootstrapMaxChars). Total bootstrap cap: 150,000 chars (bootstrapTotalMaxChars). Beyond these, content is truncated.
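The truncation behavior can be sketched as a simple character clamp. The cap values come from the docs (bootstrapMaxChars, bootstrapTotalMaxChars); the function itself is an illustration, not OpenClaw's implementation:

```python
BOOTSTRAP_MAX_CHARS = 20_000         # per-file cap (bootstrapMaxChars)
BOOTSTRAP_TOTAL_MAX_CHARS = 150_000  # total cap (bootstrapTotalMaxChars)

def clamp_bootstrap_files(files: list[str]) -> list[str]:
    """Clamp each file to the per-file cap, then stop once the
    running total reaches the overall bootstrap cap."""
    out, total = [], 0
    for content in files:
        remaining = BOOTSTRAP_TOTAL_MAX_CHARS - total
        if remaining <= 0:
            break
        content = content[:BOOTSTRAP_MAX_CHARS][:remaining]
        out.append(content)
        total += len(content)
    return out
```

Under this model, an oversized MEMORY.md is silently cut at 20,000 characters, and later bootstrap files can be dropped entirely once the 150,000-character total is spent.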
Run openclaw memory status to check index health: it shows your index provider, file count, and search readiness. Use /context list in chat to see what's consuming your context window. If you have an OpenAI or Gemini key configured, hybrid search should be enabled automatically.
Tell your agent three preferences: “Remember that I prefer TypeScript,” “Remember that I take meetings in Eastern Time,” etc. Then in a new session, ask about those preferences and verify the agent recalls them.
Run openclaw memory search "your query" to see what comes back. Try a semantic query and a keyword query to feel the difference between vector and BM25 matching.
Open ~/.openclaw/workspace/MEMORY.md and review what’s there. Remove anything stale, consolidate duplicates, and check token usage with /context list in chat.
Your agent has a personality, it has memory, and now it needs an operating manual. AGENTS.md is where you define how your agent works — tool usage policies, workflow rules, escalation procedures, and the behavioral guardrails that make autonomy safe. On Day 9 we write the playbook.