m3-memory
Health Warn
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 5 GitHub stars
Code Pass
- Code scan — Scanned 12 files during a light audit; no dangerous patterns found
Permissions Pass
- Permissions — No dangerous permissions requested
This tool provides a persistent, local memory layer for AI agents. It allows agents to remember context, detect contradictions, and maintain a knowledge graph across different sessions without relying on cloud services.
Security Assessment
The tool processes and stores your conversational context and project data locally. By design, it operates fully offline using local SQLite databases and requires a local embedding model (served via a runtime such as Ollama) to function. A light code scan of 12 files found no hardcoded secrets, no dangerous execution patterns, and no dangerous permission requests. Network activity is limited to optional local PostgreSQL syncing or connecting to your local AI model server. Overall risk: Low.
Quality Assessment
The project is very new and currently has low community visibility with only 5 GitHub stars, meaning it has not been extensively peer-reviewed. However, it is actively maintained (last updated today), uses the permissive Apache-2.0 license, and has a clear, detailed instruction manual. Developers should expect an early-stage but functional tool.
Verdict
Safe to use, though keep in mind it is an early-stage project with minimal community testing.
Local-first Agentic Memory Layer for MCP Agents • 25 tools • Hybrid search (FTS5 + vector + MMR) • GDPR • 100% local
M3 Memory
Persistent, local memory for MCP agents.
Your agent forgets everything between sessions. M3 Memory fixes that. Install it, add one line to your MCP config, and your agent remembers across sessions, detects contradictions, and keeps its own knowledge current — all on your hardware, fully offline.
Works with Claude Code, Gemini CLI, Aider, and any MCP-compatible agent.
Install
pip install m3-memory
Add to your MCP config:
{
  "mcpServers": {
    "memory": { "command": "mcp-memory" }
  }
}
Requires a local embedding model. Ollama is the easiest way to run one:
ollama pull nomic-embed-text && ollama serve
Prefer a GUI? LM Studio works too — load any embedding model (e.g. nomic-embed-text-v1.5) and start its server (defaults to port 1234).
Restart your agent. Done.
What happens next
You're at a coffee shop on your MacBook, asking Claude to debug a deployment issue. It remembers the architecture decisions you made last week, the server configs you stored yesterday, and the troubleshooting steps that worked last time — all from local SQLite, no internet required.
Later, you're at your Windows desktop at home with Gemini CLI, and it picks up exactly where you left off. Same memories, same context, same knowledge graph. You didn't copy files, didn't export anything, didn't push to someone else's cloud. Your PostgreSQL sync handled everything in the background the moment your laptop hit the local network.
Why this exists
Most AI agents don't persist state between sessions. You re-paste context, re-explain architecture, re-correct mistakes. When facts change, the agent has no mechanism to update what it "knows."
M3 Memory gives agents a structured, persistent memory layer that handles this.
What it does
Persistent memory — facts, decisions, preferences survive across sessions. Stored in local SQLite.
Hybrid retrieval — FTS5 keyword matching + semantic vector similarity + MMR diversity re-ranking. Scored and explainable.
Contradiction handling — conflicting facts are automatically superseded. Bitemporal versioning preserves the full history.
Knowledge graph — related memories linked automatically on write. Eight relationship types, 3-hop traversal.
Local and private — embeddings generated locally. No cloud calls. No API costs. Works offline.
Cross-device sync — optional bi-directional delta sync across SQLite, PostgreSQL, and ChromaDB. Same memory on every machine.
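The hybrid retrieval described above ends with MMR diversity re-ranking. As a rough sketch of the general MMR technique (not this project's actual implementation), in plain Python:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def mmr_rerank(query_vec, doc_vecs, k=3, lam=0.7):
    """Maximal Marginal Relevance: pick k results balancing relevance
    to the query against redundancy with results already selected.
    lam=1.0 is pure relevance; lam=0.0 is pure diversity."""
    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in candidates:
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lam * relevance - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a low `lam`, a near-duplicate of an already-selected result is penalized and a less similar but novel memory wins the next slot, which is what keeps search results from collapsing into redundant variations of one fact.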
Who this is for
| Good fit | Not the right tool |
|---|---|
| You use Claude Code, Gemini CLI, Aider, or any MCP agent | You need LangChain/CrewAI pipeline memory — see Mem0 |
| You're coordinating multiple agents on a shared local store | You need a hosted agent runtime with managed scaling — see Letta |
| You want memory that persists across sessions and devices | You only need in-session chat context |
Why trust this
| Claim | Detail |
|---|---|
| 44 MCP tools | Memory, search, GDPR — plus agent registry, handoffs, notifications, and tasks for multi-agent orchestration |
| 193 end-to-end tests | Covering write, search, contradiction, sync, GDPR, maintenance, and orchestration paths |
| Explainable retrieval | `memory_suggest` returns vector, BM25, and MMR scores per result |
| SQLite core | No external database required. Single-file, portable, inspectable |
| GDPR compliance | `gdpr_forget` (Article 17) and `gdpr_export` (Article 20) as built-in tools |
| Self-maintaining | Automatic decay, dedup, orphan pruning, retention enforcement |
| Apache 2.0 licensed | Free. No SaaS tier, no usage limits, no lock-in |
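The keyword half of the hybrid search rests on SQLite's built-in FTS5 extension, which is what makes the single-file store both fast and inspectable. A minimal, self-contained sketch of FTS5 matching with BM25 ranking; the table name and contents here are invented for illustration, not M3's real schema:

```python
import sqlite3

# Hypothetical schema for illustration only; M3's actual tables may differ.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
con.executemany(
    "INSERT INTO memories(content) VALUES (?)",
    [
        ("Deploy target is the staging server",),
        ("Postgres sync runs on the local network",),
        ("Preferred linter is ruff",),
    ],
)
# bm25() is FTS5's built-in ranking function; lower values rank higher.
rows = con.execute(
    "SELECT content, bm25(memories) FROM memories "
    "WHERE memories MATCH 'sync' ORDER BY bm25(memories)"
).fetchall()
```

Because the index lives inside the same SQLite file as the data, you can open the store with any `sqlite3` client and query it directly.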
Core tools
Most sessions use three tools. The rest is there when you need it.
| Tool | Purpose |
|---|---|
| `memory_write` | Store a fact, decision, preference, config, or observation |
| `memory_search` | Retrieve relevant memories (hybrid search) |
| `memory_update` | Refine existing knowledge |
| `memory_suggest` | Search with full score breakdown |
| `memory_get` | Fetch a specific memory by ID |
All 25 tools are documented in AGENT_INSTRUCTIONS.md.
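Under MCP, each of these tools is invoked with a JSON-RPC 2.0 `tools/call` request that the agent sends to the server over stdio. A sketch of what a `memory_write` call might look like on the wire; the argument names under `arguments` are illustrative assumptions, not the tool's documented schema:

```python
import json

# Shape of an MCP "tools/call" request (JSON-RPC 2.0). The "arguments"
# keys below are hypothetical; consult AGENT_INSTRUCTIONS.md for the
# tool's real parameters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_write",
        "arguments": {
            "content": "We deploy from the main branch only",
            "type": "decision",
        },
    },
}
payload = json.dumps(request)
```

In practice you never build these messages by hand: any MCP-compatible agent discovers the tools via `tools/list` and constructs the calls itself.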
For AI agents
M3 Memory exposes 25 MCP tools for storing, searching, updating, and linking knowledge. Any MCP-compatible agent can use them automatically.
To teach your agent best practices (search before answering, write aggressively, update instead of duplicating), drop the compact rules file into your project:
examples/AGENT_RULES.md
Full tool reference with all parameters and behaviors: AGENT_INSTRUCTIONS.md
Let your agent install it
Already inside Claude Code or Gemini CLI? Paste one of these prompts:
Claude Code:
Install m3-memory for persistent memory. Run: pip install m3-memory
Then add {"mcpServers":{"memory":{"command":"mcp-memory"}}} to my
~/.claude/settings.json under "mcpServers". Make sure Ollama is running
with nomic-embed-text. Then use /mcp to verify the memory server loaded.
Gemini CLI:
Install m3-memory for persistent memory. Run: pip install m3-memory
Then add {"mcpServers":{"memory":{"command":"mcp-memory"}}} to my
~/.gemini/settings.json under "mcpServers". Make sure Ollama is running
with nomic-embed-text.
After install, test it:
- Write a memory: "M3 Memory installed successfully on [today's date]"
- Then search for: "M3 install"
See it in action
- Contradiction detection
- Hybrid search with scores
- Cross-device, cross-platform sync
Learn more
- Get running → QUICKSTART.md
- Understand features → CORE_FEATURES.md
- System design → docs/ARCHITECTURE.md
- Implementation details → TECHNICAL_DETAILS.md
- Agent rules + all 25 tools → AGENT_INSTRUCTIONS.md
- M3 vs alternatives → COMPARISON.md
- Configuration → ENVIRONMENT_VARIABLES.md
- Roadmap → ROADMAP.md
Community
Contributing · Good first issues