codexfi

Security Audit
Warn
Health — Warn
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 7 GitHub stars
Code — Pass
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions — Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool provides persistent memory for OpenCode AI agents. It automatically extracts key facts from your coding sessions, stores them locally, and silently injects them into future sessions so the agent remembers your past decisions and project context.

Security Assessment
The tool integrates with external AI services (Anthropic, xAI, Google, and Voyage AI) to process data and generate embeddings, meaning it makes outbound network requests and handles your API keys. The README explicitly displays a `curl | bash` installation method, which always warrants caution as it executes remote code directly on your machine. However, a light code scan of 12 files found no dangerous patterns, hardcoded secrets, or requests for dangerous system permissions. Because it routes your code context to third-party AI providers and uses a piped bash installer, overall risk is rated as Medium.

Quality Assessment
The project is actively maintained, with its last code push occurring today. It uses the permissive MIT license, includes a clear description, and is written in TypeScript. However, it has very low community visibility with only 7 GitHub stars, indicating minimal external testing, auditing, or widespread community trust.

Verdict
Use with caution — the code appears safe, but the low community adoption and requirement to expose API keys to a new, untested tool mean you should review the installer script before running it.
Summary

Persistent memory for OpenCode AI agents. Embedded LanceDB + Voyage AI embeddings, Bun plugin with CLI and web dashboard.

README.md

codexfi

Persistent memory for OpenCode AI agents.

The best memory system is the one that just works. No Docker. No Python. No frontend servers. Just curl -fsSL https://codexfi.com/install | bash and you're done.


[Badges: npm · License: MIT · TypeScript · Bun · LanceDB · Voyage AI · Anthropic · xAI · Google · OpenCode]

[Diagram: codexfi memory loop — conversation to memory and back]

What is this?

OpenCode starts every session from scratch. No memory of past decisions, established patterns, or project context. Every session, you repeat yourself.

codexfi fixes this. After every assistant turn, key facts are automatically extracted and stored locally. On every new session, relevant memories are silently injected into context. The agent just remembers.


Install

curl -fsSL https://codexfi.com/install | bash

The installer prompts for API keys, registers the plugin, and you're done.

To verify everything is configured correctly before your first session:

bunx codexfi status

Once a session has run, prove it's working: start a second session and ask the agent about something from the first — it will know.

Prerequisites


How it works

5-step pipeline: You code → Extract → Store → Inject → Remember
  1. You code normally. No commands, no /save, nothing to learn.
  2. After every turn, the plugin extracts typed facts from the conversation and stores them locally in LanceDB.
  3. On every LLM call, a [MEMORY] block is rebuilt into the system prompt with project context, preferences, and semantically relevant memories.
  4. When you switch topics mid-session, semantic search refreshes to surface different memories.
  5. Across sessions, the agent picks up where it left off — project decisions, error fixes, your preferences, everything.

All data stays on your machine at ~/.codexfi/. Nothing is sent anywhere except the embedding and extraction APIs.
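The injection step (3) can be sketched as follows. This is a hypothetical illustration, not the plugin's actual internals: the Memory type and the block layout are assumptions, and only the [MEMORY] marker and the similarityThreshold and maxMemories settings come from this page.

```typescript
// Hypothetical shape of a retrieved memory; the real schema may differ.
interface Memory {
  type: string;   // e.g. "architecture", "preference"
  text: string;
  score: number;  // semantic similarity to the current context
}

// Rebuild a [MEMORY] block and prepend it to the system prompt on each LLM call.
function injectMemories(systemPrompt: string, memories: Memory[], max = 10): string {
  const relevant = memories
    .filter((m) => m.score >= 0.45)        // similarityThreshold from the config
    .sort((a, b) => b.score - a.score)
    .slice(0, max);                        // maxMemories from the config
  if (relevant.length === 0) return systemPrompt;
  const block = [
    "[MEMORY]",
    ...relevant.map((m) => `- (${m.type}) ${m.text}`),
    "[/MEMORY]",
  ].join("\n");
  return `${block}\n\n${systemPrompt}`;
}
```

Because the block is rebuilt on every call rather than stored in the conversation, it survives context compaction, which matches the "compaction-proof" claim below.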


Features

[Diagram: codexfi features — Local Storage, Automatic, Typed Memory, Multi-Provider, Compaction-Proof, Privacy Filter]
  • Fully automatic — saves after every turn, injects on every call
  • 100% local storage — LanceDB embedded in-process
  • Multi-provider extraction — Anthropic Haiku (default), xAI Grok, or Google Gemini
  • Code-optimised embeddings — Voyage voyage-code-3
  • Typed memory system — architecture, error-solution, preference, progress, and more
  • Smart deduplication — cosine similarity prevents duplicates
  • Contradiction handling — new facts automatically supersede stale ones
  • Compaction-proof — memory lives in the system prompt, never lost to context truncation
  • Privacy filter — wrap content in <private>...</private> to exclude from extraction
  • Web dashboard — codexfi dashboard for live activity, costs, and memory browser
  • CLI — codexfi list, search, stats, export, forget, status
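Smart deduplication is described above as cosine-similarity based. A minimal sketch of that idea, with a hypothetical duplicate threshold (the plugin's actual cutoff is not documented here):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A candidate fact is a duplicate if any stored embedding is too similar.
// The 0.95 threshold is an assumption for illustration.
function isDuplicate(candidate: number[], stored: number[][], threshold = 0.95): boolean {
  return stored.some((e) => cosine(candidate, e) >= threshold);
}
```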

Configuration

Config at ~/.config/opencode/codexfi.jsonc:

{
  "extractionProvider": "anthropic",  // "anthropic", "xai", or "google"
  "voyageApiKey": "pa-...",
  "anthropicApiKey": "sk-ant-...",
  "similarityThreshold": 0.45,       // retrieval cutoff (0-1)
  "maxMemories": 10                  // per scope per session
}
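Note the file is JSONC (JSON with comments), so plain JSON.parse will reject it as shown. A naive sketch of a loader, assuming comments never appear inside string values; in practice a real JSONC parser handles those edge cases:

```typescript
// Naive JSONC parser: strips "//" line comments, then parses as JSON.
// Breaks on "//" inside string values (e.g. URLs), so it is a sketch only.
function parseJsonc(raw: string): Record<string, unknown> {
  return JSON.parse(raw.replace(/\/\/.*$/gm, ""));
}
```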

Agent instructions (optional)

The plugin works without this, but adding instructions to ~/.config/opencode/AGENTS.md improves agent behavior — it understands the [MEMORY] block, uses the memory tool correctly, and never announces memory operations.

See plugin/README.md for the recommended AGENTS.md snippet.


Privacy

All data stays on your machine. Outbound API calls go only to Voyage AI (embeddings) and your chosen extraction provider (one per turn). Wrap sensitive content in <private>...</private> to exclude it entirely.
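The privacy filter could work as a simple strip before extraction. A minimal sketch; the plugin's actual matching rules (nesting, malformed tags) are not documented here:

```typescript
// Remove <private>...</private> spans so their contents never reach the
// extraction or embedding APIs. Non-greedy match handles multiple spans.
function stripPrivate(text: string): string {
  return text.replace(/<private>[\s\S]*?<\/private>/g, "").trim();
}
```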


More

  • Plugin internals — architecture, data flow, extraction providers, development setup
  • Benchmark — 94.5% on DevMemBench (200 questions, 8 categories)
  • E2E tests — 13 autonomous test scenarios

License

MIT


Built with OpenCode · LanceDB · Voyage AI · Bun
