roampal-core
Health — Passed
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 40 GitHub stars
Code — Passed
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions — Passed
- Permissions — No dangerous permissions requested
This MCP server provides persistent, outcome-based memory for AI coding assistants like Claude Code and OpenCode. It automatically promotes good advice and demotes bad advice across sessions without changing your existing workflow.
Security Assessment
The automated code scan reviewed 12 files and found no dangerous code patterns, hardcoded secrets, or risky permission requests. The tool does capture your AI chat exchanges to score outcomes, meaning it handles your conversational data. Based on the README, it operates via local CLI hooks and plugins. Since it is a memory-scoring tool, be aware that it inherently processes and stores the context of your development sessions locally. Overall risk: Low.
Quality Assessment
The project appears healthy and actively maintained, with repository activity as recent as today. It uses the permissive Apache-2.0 license and has garnered 40 GitHub stars, indicating a solid baseline of community trust for a new tool. It includes clear documentation, CI testing badges, and straightforward setup instructions.
Verdict
Safe to use.
Outcome-based persistent memory MCP server for Claude Code and OpenCode. Good advice promoted, bad advice demoted. pip install roampal.
Roampal — Outcome-Based Persistent Memory MCP Server
Two commands. Your AI coding assistant gets outcome-based memory.
Works with Claude Code and OpenCode.
Why?
AI coding assistants forget everything between sessions. You explain your architecture, your preferences, your conventions — again. When they give bad advice, there's no mechanism to learn from it.
Roampal is an MCP server that gives your AI persistent, outcome-based memory across every session. Good advice gets promoted. Bad advice gets demoted. Your AI learns what works and what doesn't — automatically, with zero workflow changes.
Quick Start
pip install roampal
roampal init
Auto-detects installed tools. Restart your editor and start chatting.
Target a specific tool:
`roampal init --claude-code` or `roampal init --opencode`
The core loop is identical — both platforms inject context, capture exchanges, and score outcomes. The delivery mechanism differs:
| | Claude Code | OpenCode |
|---|---|---|
| Context injection | Hooks (stdout) | Plugin (system prompt) |
| Exchange capture | Stop hook | Plugin `session.idle` event |
| Scoring | Main LLM via `score_memories` tool | Independent sidecar (your chosen model > Zen free) |
| Self-healing | Hooks auto-restart server on failure | Plugin auto-restarts server on failure |
Claude Code prompts the main LLM to score each exchange via the `score_memories` tool. OpenCode never self-scores — an independent sidecar (a separate API call) reviews each exchange as a third party, removing self-assessment bias. The `score_memories` tool is not registered on OpenCode.

During `roampal init` or `roampal sidecar setup`, Roampal detects local models (Ollama, LM Studio, etc.) and lets you choose a scoring model. If configured, these take priority (Zen is skipped for privacy). A cheap or local model works great — scoring doesn't need a powerful model. Defaults to Zen free models (remote, best-effort) if you skip setup.
How It Works
When you type a message, Roampal automatically injects relevant context before your AI sees it:
You type:
fix the auth bug
Your AI sees:
═══ KNOWN CONTEXT ═══
• JWT refresh pattern fixed auth loop [id:patterns_a1b2] (3d, 90% proven, patterns)
• User prefers: never stage git changes [id:mb_c3d4] (memory_bank)
═══ END CONTEXT ═══
fix the auth bug
No manual calls. No workflow changes. It just works.
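The injection step above can be sketched as a simple prompt transformation. This is an illustrative sketch, not Roampal's actual implementation — the `Memory` shape and the exact formatting are assumptions based on the example output shown above:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str    # the remembered fact or pattern
    mem_id: str  # e.g. "patterns_a1b2"
    meta: str    # age / confidence annotation, e.g. "3d, 90% proven, patterns"

def inject_context(user_message: str, memories: list[Memory]) -> str:
    """Prepend relevant memories to the user's message, as the AI would see it."""
    if not memories:
        return user_message  # nothing relevant: message passes through unchanged
    lines = ["═══ KNOWN CONTEXT ═══"]
    for m in memories:
        lines.append(f"• {m.text} [id:{m.mem_id}] ({m.meta})")
    lines.append("═══ END CONTEXT ═══")
    lines.append(user_message)
    return "\n".join(lines)

prompt = inject_context(
    "fix the auth bug",
    [Memory("JWT refresh pattern fixed auth loop", "patterns_a1b2",
            "3d, 90% proven, patterns")],
)
print(prompt)
```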
The Loop
- You type a message
- Roampal injects relevant context automatically (hooks in Claude Code, plugin in OpenCode)
- AI responds with full awareness of your history, preferences, and what worked before
- Outcome scored — good advice gets promoted, bad advice gets demoted
- Repeat — the system gets smarter every exchange
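The promote/demote mechanics in step 4 can be sketched as a running score per memory. The 0–1 scale, the boost size, and the cutoff values here are illustrative assumptions, not Roampal's actual tuning:

```python
def apply_outcome(score: float, outcome: str, boost: float = 0.1) -> float:
    """Nudge a memory's score up on a good outcome, down on a bad one
    (clamped to the 0..1 range — scale and boost are assumptions)."""
    if outcome == "good":
        return min(1.0, score + boost)
    if outcome == "bad":
        return max(0.0, score - boost)
    return score

def classify(score: float) -> str:
    """Decide a memory's fate from its running score (illustrative cutoffs)."""
    if score >= 0.8:
        return "promote"  # e.g. history -> patterns
    if score <= 0.2:
        return "demote"   # fades out, eventually deleted
    return "keep"
```

Repeated good outcomes push a memory past the promotion cutoff; repeated bad ones push it toward deletion — which is the whole loop in miniature.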
Five Memory Collections
| Collection | Purpose | Lifetime |
|---|---|---|
| `working` | Current session context | 24h — promotes if useful, deleted otherwise |
| `history` | Past conversations | 30 days, outcome-scored |
| `patterns` | Proven solutions | Persistent while useful, promoted from `history` |
| `memory_bank` | Identity, preferences, goals | Permanent |
| `books` | Uploaded reference docs | Permanent |
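The `working` collection's 24-hour promote-or-delete rule can be sketched as a small decision function — a minimal illustration of the lifetime column above, with the field names being assumptions:

```python
from datetime import datetime, timedelta

WORKING_TTL = timedelta(hours=24)

def expire_working(created_at: datetime, was_useful: bool, now: datetime) -> str:
    """Apply the working-collection rule: keep entries younger than 24h;
    after that, promote the ones that proved useful and delete the rest."""
    if now - created_at < WORKING_TTL:
        return "keep"
    return "promote" if was_useful else "delete"
```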
Commands
roampal init # Auto-detect and configure installed tools
roampal init --claude-code # Configure Claude Code explicitly
roampal init --opencode # Configure OpenCode explicitly
roampal init --no-input # Non-interactive setup (CI/scripts)
roampal start # Start the HTTP server manually
roampal stop # Stop the HTTP server
roampal status # Check if server is running
roampal status --json # Machine-readable status (for scripting)
roampal stats # View memory statistics
roampal stats --json # Machine-readable statistics (for scripting)
roampal doctor # Diagnose installation issues
roampal summarize # Summarize long memories (retroactive cleanup)
roampal score # Score the last exchange (manual/testing)
roampal context # Output recent exchange context
roampal ingest <file> # Add documents to books collection
roampal books # List all ingested books
roampal remove <title> # Remove a book by title
roampal sidecar status # Check scoring model configuration (OpenCode)
roampal sidecar setup # Configure scoring model (OpenCode)
roampal sidecar test # Test scoring model response format (OpenCode)
roampal sidecar disable # Remove scoring model configuration (OpenCode)
MCP Tools
Your AI gets these memory tools:
| Tool | Description | Platforms |
|---|---|---|
| `search_memory` | Deep search across all collections | Both |
| `add_to_memory_bank` | Store permanent facts (identity, preferences, goals) | Both |
| `update_memory` | Correct or update existing memories | Both |
| `delete_memory` | Remove outdated info | Both |
| `score_memories` | Score previous exchange outcomes | Claude Code |
| `record_response` | Store key takeaways from significant exchanges | Both |
How scoring works: Claude Code's hooks prompt the main LLM to call `score_memories` every turn. OpenCode uses an independent sidecar that scores silently in the background — the model never sees a scoring prompt and `score_memories` is not registered as a tool. If the sidecar is unavailable, a warning prompts the user to run `roampal sidecar setup`. Choose your scoring model during `roampal init` or via `roampal sidecar setup`.
What's Different?
| Without Roampal | With Roampal |
|---|---|
| Forgets everything between sessions | Remembers you, your preferences, what worked |
| You repeat context every time | Context injected automatically |
| No learning from mistakes | Outcomes tracked — bad advice gets demoted |
| No document memory | Ingest docs, searchable forever |
Benchmarks
85.8% on LoCoMo (non-adversarial, end-to-end answer accuracy) — validated on 1,986 questions across 10 conversations with dual grading.
| Result | Score |
|---|---|
| Conversational learning vs raw ingestion | +23 points (76.6% vs 53.0%, p<0.0001) |
| Architecture vs model effect | Architecture ~10x larger contributor |
| Poison resilience (1,135 adversarial memories) | -2.6 to -4.2 points only |
| TagCascade retrieval (tags-first + CE rerank) | +1.9 Hit@1 vs pure CE (p<0.0001) |
Benchmark pipeline runs on a single GPU with no cloud dependencies. Roampal itself runs on CPU — no GPU required. Full methodology, data, and evaluation scripts: roampal-labs
Paper: "Beyond Ingestion: What Conversational Memory Learning Reveals on a Corrected LoCoMo Benchmark" (Logan Teague, April 2026)
How Roampal Compares
| Feature | Roampal Core | Claude Code built-in (CLAUDE.md / auto memory) | OpenCode built-in |
|---|---|---|---|
| Learns from outcomes | Yes — bad advice demoted, good advice promoted | No | No |
| Semantic retrieval | Yes — TagCascade + cross-encoder reranking | No — files loaded in full, no search | No memory system |
| Context injection | Automatic — relevant memories per query | Full CLAUDE.md every session, auto memory on demand | None |
| Atomic fact extraction | Yes — summaries + facts, two-lane retrieval | No — saves what Claude decides is useful | No |
| Works across projects | Yes — shared memory across all projects | Per-project only (per git repo) | No memory |
| Scales with history | Yes — 5 collections, promotion/demotion/decay | CLAUDE.md unbounded, auto memory first 200 lines | No memory |
| Fully local / private | Yes — ChromaDB on your machine | Yes | Yes |
┌─────────────────────────────────────────────────────────┐
│ pip install roampal && roampal init │
│ Claude Code: hooks + MCP → ~/.claude/ │
│ OpenCode: plugin + MCP → ~/.config/opencode/ │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ HTTP Hook Server (port 27182) │
│ Auto-started on first use, self-heals on failure │
│ Manual control: roampal start / roampal stop │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ User types message │
│ → Hook/plugin calls HTTP server for context │
│ → AI sees relevant memories, responds │
│ → Exchange stored, scored (hooks or sidecar) │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Single-Writer Backend │
│ FastAPI → UnifiedMemorySystem → ChromaDB │
│ All clients share one server, isolated by session │
└─────────────────────────────────────────────────────────┘
See dev/docs/ for full technical details.
Requirements
- Python 3.10+
- One of: Claude Code or OpenCode
- Platforms: Windows, macOS, Linux (primarily developed and tested on Windows)
- RAM: ~800MB available (cross-encoder reranker + embeddings + ChromaDB)
- Disk: ~500MB for models (multilingual embedding + reranker, downloaded automatically on first use)
- CPU: Any modern x86-64 processor with AVX2 (Intel Haswell 2013+ / AMD Excavator 2015+)
- GPU: Not required — all inference runs on CPU via ONNX Runtime
Troubleshooting
**Hooks not working? (Claude Code)**
- Restart Claude Code (hooks load on startup)
- Check the HTTP server: `curl http://127.0.0.1:27182/api/health`
- Verify `~/.claude.json` has the `roampal-core` MCP entry with the correct Python path
- Check the Claude Code output panel for MCP errors

**Plugin not working? (OpenCode)**
- Make sure you ran `roampal init --opencode`
- Check that the server auto-started: `curl http://127.0.0.1:27182/api/health`
- If not, start it manually: `roampal start`
This is expected. Roampal is self-healing: if the HTTP server stops responding, it is automatically restarted and the request retried.
Still stuck? Ask your AI for help — it can read logs and debug Roampal issues directly.
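The `curl` checks above can also be scripted. This is a small convenience wrapper, not part of the Roampal CLI — it only assumes the health endpoint and port documented above:

```python
import urllib.request
from urllib.error import URLError

def roampal_healthy(port: int = 27182, timeout: float = 2.0) -> bool:
    """Return True if the Roampal hook server answers its health endpoint,
    False if it is down or unreachable."""
    try:
        with urllib.request.urlopen(
            f"http://127.0.0.1:{port}/api/health", timeout=timeout
        ) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```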
Support
Roampal Core is completely free and open source.
- Support development: roampal.gumroad.com
- Feature ideas & feedback: Discord
- Bug reports: GitHub Issues
- Need help with AI memory? Reach out: [email protected] | LinkedIn
License
Apache-2.0