engram-memory-community
Health Warning
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 5 GitHub stars
Code Failed
- rm -rf — Recursive force deletion command in context/templates/web_app.yaml
- process.env — Environment variable access in context/templates/web_app.yaml
- network request — Outbound network request in context/templates/web_app.yaml
Permissions Passed
- Permissions — No dangerous permissions requested
This tool provides a three-tiered persistent memory system for AI agents using vector databases. It allows agents to store, search, recall, and forget context across sessions entirely through local, self-hosted infrastructure.
Security Assessment
Overall risk: Medium. The tool accesses environment variables to manage configurations, which is standard but requires caution regarding what is exposed. It makes outbound network requests, likely for communicating with the Qdrant vector database or local embedding models. A failed scan flag identifies a recursive force deletion command (`rm -rf`) located inside a web app template. While potentially standard for a cleanup script, this poses a localized risk if executed improperly. No hardcoded secrets were found, and it does not request broad dangerous permissions.
Quality Assessment
The project uses the standard, permissive MIT license and was updated very recently. However, it suffers from extremely low community visibility, with only 5 GitHub stars, which indicates a lack of peer review and external testing. Additionally, the listing does not state which command-line interfaces the tool is compatible with, which may complicate integration efforts.
Verdict
Use with caution — while it offers an active and locally hosted architecture, the low community adoption and presence of `rm -rf` within templates mean you should review the setup scripts before deploying.
The highest-scoring AI memory system ever benchmarked that isn't reliant on LLM reranking. And it's free and burns fewer tokens.
Three-Tiered Brain for AI agents. Self-hosted. Zero API costs.
Docs · Quickstart · Dashboard · Cloud SDKs
Engram gives your AI agent persistent memory across sessions. Store, search, recall, and forget memories using semantic embeddings — all running on your own hardware with Qdrant and FastEmbed.
One repo, two interfaces: an OpenClaw skill and a universal MCP server that works with Claude Code, Cursor, Windsurf, and VS Code.
What You Get
| Tool | What it does |
|---|---|
| `memory_store` | Save a memory with semantic embedding and auto-classification |
| `memory_search` | Semantic similarity search across all stored memories |
| `memory_recall` | Auto-inject relevant memories into agent context |
| `memory_forget` | Remove memories by ID or search match |
Categories: preference, fact, decision, entity, other — auto-detected from content.
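The auto-classification step isn't documented in detail here; as a rough illustration, category detection could be as simple as keyword heuristics. The hints below are assumptions for the sketch, not the shipped logic in plugin.py:

```python
# Hypothetical sketch of category auto-detection; the real classifier
# may use embeddings or entirely different heuristics.
CATEGORY_HINTS = {
    "preference": ("prefers", "likes", "favorite", "always use"),
    "decision": ("decided", "chose", "agreed", "will use"),
    "entity": ("is a", "works at", "named"),
}

def classify(content: str) -> str:
    """Return one of: preference, fact, decision, entity, other."""
    text = content.lower()
    for category, hints in CATEGORY_HINTS.items():
        if any(hint in text for hint in hints):
            return category
    # Plain statements with a copula but no stronger signal become facts.
    if " is " in text or " are " in text:
        return "fact"
    return "other"
```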
Quick Start
1. Deploy the backend
```bash
# Requires Docker
bash scripts/setup.sh
```
This starts Qdrant (vector DB) and FastEmbed (local embedding model) on your machine.
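The authoritative definition lives in `config/docker-compose.yml`; as a sketch, the two services probably look something like the fragment below. Image names and volume layout here are assumptions; only the ports match the defaults documented in the Configuration table:

```yaml
# Sketch only — see config/docker-compose.yml for the real definition.
services:
  qdrant:
    image: qdrant/qdrant          # vector database
    ports:
      - "6333:6333"               # matches the default qdrantUrl
    volumes:
      - qdrant_data:/qdrant/storage
  fastembed:
    build: docker/fastembed       # local embedding service built from this repo
    ports:
      - "11435:11435"             # matches the default embeddingUrl
volumes:
  qdrant_data:
```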
2. Connect your agent
OpenClaw (replaces default memory system):
```bash
# Clone and install the plugin
git clone https://github.com/EngramMemory/engram-memory-community.git
cd engram-memory-community
bash scripts/install-plugin.sh
```
This installs Engram as a plugin (not a skill) and sets it as the memory backend, replacing the built-in SQLite memory with the three-tier recall engine. No API key required — runs fully local with your Qdrant and FastEmbed.
Claude Code:
```bash
claude mcp add engrammemory -- python mcp/server.py
```
Cursor / Windsurf / VS Code — add to .mcp.json:
```json
{
  "mcpServers": {
    "engrammemory": {
      "command": "python",
      "args": ["mcp/server.py"]
    }
  }
}
```
3. Use it
```python
# Store a memory
memory_store("User prefers TypeScript over JavaScript", category="preference")

# Search memories
memory_search("language preferences")

# Forget a memory
memory_forget(query="old project requirements")
```
Architecture
```
┌─────────────────┐      ┌──────────────────────────────────────────┐
│   Your Agent    │      │       Three-Tier Recall Engine           │
│  (OpenClaw,     │─────▶│  Tier 1: Hot Cache (sub-ms, decay)       │
│   Claude Code,  │      │  Tier 2: Hash Index (O(1) LSH lookup)    │
│   Cursor, etc)  │      │  Tier 3: Qdrant ANN (full vector)        │
└─────────────────┘      └───────────┬──────────────────────────────┘
                                     │
                         ┌───────────▼───────────┐
                         │   FastEmbed (local)   │──▶ Qdrant (local)
                         └───────────────────────┘
```
All on your hardware. Nothing leaves your network.
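The cascade above can be sketched in a few lines of Python. This is a hypothetical simplification of `src/recall/recall_engine.py`: the real tiers use a decay-scored cache, LSH buckets, and Qdrant rather than these stand-ins.

```python
# Minimal sketch of a three-tier read path: each tier is tried in order,
# and a hit in a cheap tier short-circuits the more expensive ones.
class ThreeTierRecall:
    def __init__(self, vector_search):
        self.hot = {}       # Tier 1: exact-match cache (stand-in for decay cache)
        self.hashed = {}    # Tier 2: hash-bucket index (stand-in for LSH)
        self.vector_search = vector_search  # Tier 3: full ANN search (e.g. Qdrant)

    def recall(self, query: str):
        if query in self.hot:                   # Tier 1: sub-ms hot cache
            return self.hot[query]
        bucket = hash(query.lower()) % 1024     # Tier 2: O(1) bucket lookup
        if bucket in self.hashed:
            result = self.hashed[bucket]
        else:
            result = self.vector_search(query)  # Tier 3: full vector search
            self.hashed[bucket] = result
        self.hot[query] = result                # promote for next time
        return result
```

A hit in Tier 1 or 2 never touches the vector database, which is where the sub-millisecond figures in the diagram come from.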
Repo Structure
```
engram-memory-community/
├── plugin/                    ← OpenClaw memory plugin (replaces default memory)
│   ├── index.js                 Plugin entry — registers tools + auto-recall/capture
│   ├── openclaw.plugin.json     Plugin manifest (kind: "memory")
│   └── package.json
├── plugin.py                  ← Python skill fallback for direct tool calls
├── src/
│   └── recall/                ← Three-tier recall engine
│       ├── recall_engine.py     Hot → Hash → Vector pipeline
│       ├── hot_tier.py          Frequency-adjusted decay cache (sub-ms)
│       ├── multi_head_hasher.py LSH O(1) candidate retrieval
│       ├── matryoshka.py        Vector slicing (768→64 dim)
│       └── models.py            MemoryResult, EngramConfig
├── skills/
│   └── openclaw/              ← OpenClaw skill (SKILL.md + plugin)
├── mcp/
│   └── server.py              ← MCP server (Claude Code, Cursor, Windsurf, VS Code)
├── scripts/                   ← Setup + fallback scripts
│   ├── memory_store.py
│   ├── memory_search.py
│   ├── fastembed_service.py
│   └── setup.sh
├── docker/
│   └── fastembed/             ← FastEmbed container (Dockerfile + service)
├── config/
│   └── docker-compose.yml
├── docs/                      ← Architecture, examples, integration guides
├── README.md
└── LICENSE
```
The OpenClaw skill and the MCP server both route through plugin.py, which uses the three-tier recall engine for every store and search operation.
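The `matryoshka.py` slicing (768→64 dims) amounts to truncating a Matryoshka-style embedding and re-normalizing it so cosine similarity stays meaningful. The function below is an illustrative assumption, not the module's actual API:

```python
import math

def slice_embedding(vec, dims=64):
    """Truncate a Matryoshka-trained embedding to its first `dims`
    components and L2-normalize the result."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    if norm == 0.0:
        return head
    return [x / norm for x in head]
```

Cheap 64-dim slices like this can feed the hash tier, while the full 768-dim vector is reserved for Qdrant's ANN search.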
OpenClaw Integration
Engram hooks into OpenClaw's agent lifecycle automatically:
- `before_agent_start` — searches for memories relevant to the user's message and injects them as context
- `after_agent_response` — extracts important facts from the conversation and stores them

Enable and configure the plugin in your OpenClaw config:
```json
{
  "plugins": {
    "entries": {
      "engram": {
        "enabled": true,
        "config": {
          "qdrantUrl": "http://localhost:6333",
          "embeddingUrl": "http://localhost:11435",
          "autoRecall": true,
          "autoCapture": true
        }
      }
    }
  }
}
```
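In Python terms, the two lifecycle hooks behave roughly as below. This is a sketch of the described behavior, not the plugin's real (JavaScript) API; the helper names and dict shapes are assumptions:

```python
MAX_RECALL = 5    # mirrors the maxRecallResults default
MIN_SCORE = 0.35  # mirrors the minRecallScore default

def before_agent_start(message, search):
    """Auto-recall: return relevant memories to inject into agent context."""
    hits = [m for m in search(message) if m["score"] >= MIN_SCORE]
    return [f"[memory] {m['text']}" for m in hits[:MAX_RECALL]]

def after_agent_response(conversation, store, extract_facts):
    """Auto-capture: persist important facts from the exchange."""
    for fact in extract_facts(conversation):
        store(fact)
```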
Configuration
| Option | Default | Description |
|---|---|---|
| `qdrantUrl` | `http://localhost:6333` | Qdrant vector database URL |
| `embeddingUrl` | `http://localhost:11435` | FastEmbed API endpoint |
| `embeddingModel` | `nomic-ai/nomic-embed-text-v1.5` | Embedding model |
| `collection` | `agent-memory` | Memory collection name |
| `autoRecall` | `true` | Auto-inject relevant memories |
| `autoCapture` | `true` | Auto-save important context |
| `maxRecallResults` | `5` | Max memories per auto-recall |
| `minRecallScore` | `0.35` | Minimum similarity threshold |
| `debug` | `false` | Enable debug logging |
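Options merge over these defaults; a minimal sketch of how a client might resolve them (a hypothetical helper, not part of the shipped code):

```python
# Documented defaults from the configuration table above.
DEFAULTS = {
    "qdrantUrl": "http://localhost:6333",
    "embeddingUrl": "http://localhost:11435",
    "embeddingModel": "nomic-ai/nomic-embed-text-v1.5",
    "collection": "agent-memory",
    "autoRecall": True,
    "autoCapture": True,
    "maxRecallResults": 5,
    "minRecallScore": 0.35,
    "debug": False,
}

def resolve_config(overrides=None):
    """Merge user-supplied overrides over the documented defaults."""
    return {**DEFAULTS, **(overrides or {})}
```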
Connecting to Engram Cloud (Optional)
Engram runs fully local by default. When you need overflow storage, TurboQuant compression, deduplication, or analytics beyond what your local machine can handle, connect to Engram Cloud:
- Get an API key at app.engrammemory.ai (free tier, no credit card)
- Add it to your OpenClaw config:

  ```bash
  openclaw config set "plugins.entries.engram.config.apiKey" "eng_live_YOUR_KEY"
  ```

- Restart OpenClaw — the plugin automatically switches from local to cloud mode
With a key, your memories still live in your Qdrant. Engram Cloud handles embedding, deduplication, and compression in transit — nothing is stored on our servers.
Requirements
- Python 3.10+
- Docker (for Qdrant + FastEmbed)
- 4GB+ RAM
- 10GB+ storage
Data & Privacy
Engram is local-only by default. No data leaves your machine.
- Memory tools store and search vectors in your local Qdrant instance
- Embeddings are generated by FastEmbed running in a local Docker container
- Context system only reads `.md` files inside your project's `.context/` directory — never arbitrary project files
- Auto-recall/auto-capture (when enabled) operate within the OpenClaw agent lifecycle — memories stay in your local Qdrant
- No telemetry, no phone-home, no external API calls
The Docker image engrammemory/fastembed is built from docker/fastembed/Dockerfile in this repo. You can verify or rebuild it yourself.
Engram Cloud
Need deduplication, compression, lifecycle management, multi-agent isolation, or analytics? Engram Cloud adds enterprise intelligence on top of your self-hosted storage.
Your Qdrant stays yours. Engram Cloud processes in transit and stores nothing.
SDKs:
- Python: `pip install engrammemory-ai` — PyPI
- Node: `npm install engrammemory-ai` — npm
- Dashboard | API Docs
Contributing
Found a bug? Want to add a feature? PRs welcome.
License
MIT — Use freely in personal and commercial projects.