fava-trails

mcp
Security Audit
Pass
Health Pass
  • License — License: Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 11 GitHub stars
Code Pass
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool provides a federated, version-controlled memory system for AI agents via the Model Context Protocol (MCP). It stores an agent's thoughts, decisions, and observations as markdown files with full lineage tracking, using a separate Git repository to manage the data.

Security Assessment
The automated code scan found no dangerous patterns, hardcoded secrets, or requests for excessive permissions. However, the tool relies on Jujutsu (JJ), a version control system, which means it inherently executes local shell commands to manage files and commits. Additionally, it requires an external `OPENROUTER_API_KEY` to function, meaning it makes outbound network requests to an LLM provider for its "Trust Gate" feature. Because the MCP server configuration requires you to provide this API key in plaintext via environment variables, there is a standard supply-chain risk regarding how the key is handled. Overall risk: Medium.

Quality Assessment
The project is actively maintained with a push made as recently as today. It uses the permissive Apache-2.0 license and includes automated testing badges in its documentation. While it has a relatively low community footprint (11 GitHub stars), it boasts professional features like draft isolation, crash-proof auto-snapshots, and a clean split between the stateless engine and your private data repository.

Verdict
Use with caution: the tool itself is safe and well-structured, but users should be aware it relies on executing local shell commands and transmits data externally using a provided LLM API key.
SUMMARY

🫛👣 FAVA Trails — Git-native, curated memory for AI agents via MCP. Draft isolation, promotion gate, thought lifecycle hooks, memory curation protocols, supersession chains.

README.md


FAVA Trails

Federated Agents Versioned Audit Trail — VCS-backed memory for AI agents via MCP.

Every thought, decision, and observation is stored as a markdown file with YAML frontmatter, tracked in a Jujutsu (JJ) colocated git monorepo. Agents interact through MCP tools — they never see VCS commands.
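A stored thought file might look like the following (the frontmatter field names here are illustrative; the project's exact schema may differ):

```markdown
---
# Illustrative frontmatter only; the real schema may differ
author: agent-alpha
created: 2025-01-15T12:00:00Z
source_type: observation
supersedes: null
---
The staging deploy fails when the configuration file is missing.
```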

Why

  • Supersession tracking — when an agent corrects a belief, the old version is hidden from default recall. No contradictory memories.
  • Draft isolation — working thoughts stay in drafts/. Other agents only see promoted thoughts.
  • Trust Gate — an LLM-based reviewer validates thoughts before they enter shared truth. Hallucinations stay contained in draft.
  • Full lineage — every thought carries who wrote it, when, and why it changed.
  • Crash-proof — JJ auto-snapshots. No unsaved work.
  • Engine/Fuel split — this repo is the engine (stateless MCP server). Your data lives in a separate repo you control.

Install

Prerequisites

Install Jujutsu (JJ) — FAVA Trails uses JJ as its VCS engine:

fava-trails install-jj

Or install manually from jj-vcs.github.io/jj.

From PyPI (recommended)

pip install fava-trails

From source (for development)

git clone https://github.com/MachineWisdomAI/fava-trails.git
cd fava-trails
uv sync

Quick Start

Set up your data repo

New data repo (from scratch):

# Create an empty repo on GitHub (or any git remote), then clone it
git clone https://github.com/YOUR-ORG/fava-trails-data.git

# Bootstrap it (creates config, .gitignore, initializes JJ)
fava-trails bootstrap fava-trails-data

Existing data repo (clone from remote):

fava-trails clone https://github.com/YOUR-ORG/fava-trails-data.git fava-trails-data

Register the MCP server

Add to your MCP client config:

  • Claude Code CLI: ~/.claude.json (top-level mcpServers key)
  • Claude Desktop: claude_desktop_config.json

If installed from PyPI:

{
  "mcpServers": {
    "fava-trails": {
      "command": "fava-trails-server",
      "env": {
        "FAVA_TRAILS_DATA_REPO": "/path/to/fava-trails-data",
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}

If installed from source:

{
  "mcpServers": {
    "fava-trails": {
      "type": "stdio",
      "command": "uv",
      "args": ["run", "--directory", "/path/to/fava-trails", "fava-trails-server"],
      "env": {
        "FAVA_TRAILS_DATA_REPO": "/path/to/fava-trails-data",
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}

For Claude Desktop on Windows (accessing WSL):

{
  "mcpServers": {
    "fava-trails": {
      "command": "wsl.exe",
      "args": [
        "-e", "bash", "-lc",
        "FAVA_TRAILS_DATA_REPO=/path/to/fava-trails-data OPENROUTER_API_KEY=sk-or-v1-... fava-trails-server"
      ]
    }
  }
}

OpenAI Codex CLI: ~/.codex/config.toml

[mcp_servers.fava-trails]
command = "fava-trails-server"

[mcp_servers.fava-trails.env]
FAVA_TRAILS_DATA_REPO = "/path/to/fava-trails-data"
OPENROUTER_API_KEY = "sk-or-v1-..."

Other MCP clients (Crush, OpenCode, etc.): check your client's MCP config docs — most accept this JSON format:

{
  "mcpServers": {
    "fava-trails": {
      "type": "stdio",
      "command": "fava-trails-server",
      "env": {
        "FAVA_TRAILS_DATA_REPO": "/path/to/fava-trails-data",
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}

The Trust Gate uses LLM verification: thoughts are reviewed before promotion to ensure they're coherent and safe. By default, FAVA Trails uses OpenRouter to access 300–500+ models from 60+ providers including Anthropic, OpenAI, Google, Qwen, and others. Get a free API key at openrouter.ai/keys. The default model (google/gemini-2.5-flash) costs ~$0.001 per review. Multi-provider support via any-llm-sdk enables switching to other providers by modifying config.yaml.

Use it

Agents call MCP tools. Core workflow:

save_thought(trail_name="myorg/eng/my-project", content="My finding about X", source_type="observation")
  → creates a draft in drafts/

propose_truth(trail_name="myorg/eng/my-project", thought_id=thought_id)
  → promotes to observations/ (visible to all agents)

recall(trail_name="myorg/eng/my-project", query="X")
  → finds the promoted thought

Agents never touch VCS commands directly; JJ expertise is not required.

Cross-Machine Sync

FAVA Trails uses git remotes for cross-machine sync. The fava-trails bootstrap command sets push_strategy: immediate which auto-pushes after every write.

Setting up a second machine

# 1. Install FAVA Trails
pip install fava-trails

# 2. Install JJ
fava-trails install-jj

# 3. Clone the SAME data repo (handles colocated mode + bookmark tracking)
fava-trails clone https://github.com/YOUR-ORG/fava-trails-data.git fava-trails-data

# 4. Register MCP (same config as above, with local paths)

Both machines push/pull through the same git remote. Use the sync MCP tool to pull latest thoughts from other machines.

Manual push (if auto-push is off)

cd /path/to/fava-trails-data
jj bookmark set main -r @-
jj git push --bookmark main

NEVER use git push origin main after JJ colocates — it misses thought commits. See AGENTS_SETUP_INSTRUCTIONS.md for the correct protocol.

Architecture

fava-trails (this repo)        fava-trails-data (your repo)
├── src/fava_trails/           ├── config.yaml
│   ├── server.py  ←── MCP ──→ ├── .gitignore
│   ├── cli.py                 └── trails/
│   ├── trail.py                   └── myorg/eng/project/
│   ├── config.py                      └── thoughts/
│   ├── trust_gate.py                      ├── drafts/
│   ├── hook_manifest.py                   ├── decisions/
│   ├── protocols/                         ├── observations/
│   │   └── secom/                         └── preferences/
│   └── vcs/
│       └── jj_backend.py
└── tests/
  • Engine (fava-trails) — stateless MCP server, Apache-2.0. Install via pip install fava-trails.
  • Fuel (fava-trails-data) — your organization's trail data, private.

Configuration

Environment variables:

  • FAVA_TRAILS_DATA_REPO — read by the server; root directory for trail data (monorepo root). Default: ~/.fava-trails
  • FAVA_TRAILS_DIR — read by the server; overrides the trails directory location (absolute path). Default: $FAVA_TRAILS_DATA_REPO/trails
  • FAVA_TRAILS_SCOPE_HINT — read by the server; broad scope hint baked into tool descriptions. Default: none
  • FAVA_TRAILS_SCOPE — read by the agent; project-specific scope from a .env file. Default: none
  • OPENROUTER_API_KEY — read by the server; API key for Trust Gate LLM reviews via OpenRouter. Default: none (required for propose_truth)

LLM Provider: FAVA Trails uses any-llm-sdk for unified LLM access. OpenRouter is the default provider (recommended for simplicity — single API key, 300–500+ models from 60+ providers). Support for additional providers (Anthropic, OpenAI, Bedrock, etc.) via config.yaml is planned for future versions.

The server reads $FAVA_TRAILS_DATA_REPO/config.yaml for global settings. Minimal config.yaml:

trails_dir: trails          # relative to FAVA_TRAILS_DATA_REPO
remote_url: null            # git remote URL (optional)
push_strategy: manual       # manual | immediate

When push_strategy: immediate, the server auto-pushes after every successful write. Push failures are non-fatal.
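The non-fatal push behavior can be sketched like this (an illustrative model, not the server's actual code):

```python
# Sketch: with push_strategy "immediate", a failed auto-push is logged
# but never aborts the write. Illustrative only.
import logging
import subprocess

def push_after_write(repo_path: str, push_strategy: str) -> bool:
    """Try to push after a successful write; failures are non-fatal."""
    if push_strategy != "immediate":
        return False  # manual strategy: never auto-push
    try:
        subprocess.run(
            ["jj", "git", "push", "--bookmark", "main"],
            cwd=repo_path, check=True, capture_output=True,
        )
        return True
    except (subprocess.CalledProcessError, FileNotFoundError) as exc:
        logging.warning("auto-push failed (non-fatal): %s", exc)
        return False
```

The write itself succeeds either way; only the push outcome differs.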

See AGENTS_SETUP_INSTRUCTIONS.md for full config reference including trust gate and per-trail overrides.

Protocols

FAVA Trails supports optional lifecycle protocols — hook modules that run custom logic at key points in the thought lifecycle (save, promote, recall). Protocols are registered in your data repo's config.yaml and loaded at server startup.

SECOM โ€” Compression at Promote Time

Extractive token-level compression via LLMLingua-2, based on the SECOM paper (Tsinghua University and Microsoft, ICLR 2025). Thoughts are compressed once at promote time (WORM pattern), reducing storage and boosting recall density. Purely extractive — only original tokens survive, no paraphrasing or rewriting.
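As a toy illustration of the extractive idea (this is not LLMLingua-2, which uses a learned token classifier rather than the length heuristic below): score tokens, keep a target fraction, and preserve the original order.

```python
# Toy sketch of extractive compression: only original tokens survive,
# in their original order. The scoring here (token length) is a stand-in
# for LLMLingua-2's learned classifier.
def compress(text: str, rate: float = 0.6) -> str:
    tokens = text.split()
    keep_n = max(1, int(len(tokens) * rate))
    # Rank token positions by score, keep the top ones, restore order.
    ranked = sorted(range(len(tokens)), key=lambda i: -len(tokens[i]))
    kept = sorted(ranked[:keep_n])
    return " ".join(tokens[i] for i in kept)

out = compress("the deployment failed because the configuration file was missing", 0.6)
print(out)
```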

pip install fava-trails[secom]

Add to your data repo's config.yaml:

hooks:
  - module: fava_trails.protocols.secom
    points: [before_propose, before_save, on_recall]
    order: 20
    fail_mode: open
    config:
      compression_threshold_chars: 500
      target_compress_rate: 0.6
      compression_engine:
        type: llmlingua

Structured data: SECOM's token-level compression has no notion of syntactic validity — JSON objects, YAML blocks, and fenced code blocks may be silently destroyed at promote time. Tag thoughts with secom-skip to opt out:

save_thought(trail_name="my/scope", content='{"phases": [...]}', metadata={"tags": ["secom-skip"]})

The before_save hook warns when structured content is detected without secom-skip.
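A detection heuristic along these lines could drive that warning (a sketch; the real hook's rules may differ):

```python
# Sketch: flag structured content that lacks the secom-skip tag.
# The detection rules here are assumptions, not the hook's actual logic.
import re

def looks_structured(content: str) -> bool:
    """Heuristic: fenced code blocks, JSON objects/arrays, or YAML keys."""
    if "```" in content:
        return True
    stripped = content.strip()
    if stripped.startswith(("{", "[")):
        return True
    # A leading "key: value" line suggests a YAML block
    return bool(re.match(r"^[\w-]+:\s", stripped))

def should_warn(content: str, tags: list[str]) -> bool:
    return looks_structured(content) and "secom-skip" not in tags

print(should_warn('{"phases": [1, 2]}', []))              # True: JSON, no opt-out
print(should_warn('{"phases": [1, 2]}', ["secom-skip"]))  # False: opted out
```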

See protocols/secom/README.md for full config reference, model options, and the secom-skip opt-out. See AGENTS_SETUP_INSTRUCTIONS.md for the general hooks system.

Quick setup via CLI:

# Print default config (copy-paste into config.yaml)
fava-trails secom setup

# Write config directly + commit with jj
fava-trails secom setup --write

# Pre-download model to avoid first-use delay
fava-trails secom warmup

ACE โ€” Agentic Context Engineering

Playbook-driven reranking and anti-pattern detection, based on ACE (arXiv:2510.04618) (Stanford, UC Berkeley, and SambaNova, ICLR 2026). Applies multiplicative scoring using rules stored in the preferences/ namespace.
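Multiplicative scoring can be sketched as follows. The rule format below is illustrative, not ACE's actual playbook schema:

```python
# Sketch: multiplicative reranking. Each matching playbook rule scales a
# result's base score; anti-pattern rules use multipliers below 1.
from dataclasses import dataclass

@dataclass
class Rule:
    pattern: str       # substring to match in a result (illustrative)
    multiplier: float  # > 1 boosts, < 1 penalizes (anti-pattern)

def rerank(results: list[tuple[str, float]], rules: list[Rule]) -> list[tuple[str, float]]:
    scored = []
    for text, base in results:
        score = base
        for rule in rules:
            if rule.pattern in text:
                score *= rule.multiplier
        scored.append((text, score))
    return sorted(scored, key=lambda r: -r[1])

rules = [Rule("deprecated", 0.2), Rule("verified", 1.5)]
results = [("deprecated API note", 0.9), ("verified fix for bug", 0.6)]
print(rerank(results, rules))  # the anti-pattern match drops to the bottom
```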

pip install fava-trails  # included in base install

Add to your data repo's config.yaml:

hooks:
  - module: fava_trails.protocols.ace
    points: [on_startup, on_recall, before_save, after_save, after_propose, after_supersede]
    order: 10
    fail_mode: open
    config:
      playbook_namespace: preferences
      telemetry_max_per_scope: 10000

Quick setup via CLI:

fava-trails ace setup           # print default config
fava-trails ace setup --write   # write + jj commit

RLM โ€” MapReduce Orchestration

Lifecycle hooks for MIT RLM (arXiv:2512.24601) MapReduce workflows. Validates mapper outputs, tracks batch progress, and sorts results deterministically for reducer consumption.
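The validation and deterministic ordering can be sketched like this (illustrative; the field names mapper_id and output are assumptions):

```python
# Sketch: validate mapper outputs against the configured expectations,
# then sort deterministically so the reducer sees a stable sequence.
def validate_and_sort(outputs: list[dict], expected_mappers: int = 5,
                      min_chars: int = 20) -> list[dict]:
    if len(outputs) != expected_mappers:
        raise ValueError(f"expected {expected_mappers} mapper outputs, got {len(outputs)}")
    for o in outputs:
        if len(o["output"]) < min_chars:
            raise ValueError(f"mapper {o['mapper_id']} output below {min_chars} chars")
    # Deterministic order for reducer consumption
    return sorted(outputs, key=lambda o: o["mapper_id"])

outputs = [{"mapper_id": i, "output": "x" * 25} for i in (2, 0, 1, 4, 3)]
print([o["mapper_id"] for o in validate_and_sort(outputs)])  # → [0, 1, 2, 3, 4]
```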

pip install fava-trails  # included in base install

Add to your data repo's config.yaml:

hooks:
  - module: fava_trails.protocols.rlm
    points: [before_save, after_save, on_recall]
    order: 15
    fail_mode: closed
    config:
      expected_mappers: 5
      min_mapper_output_chars: 20

Quick setup via CLI:

fava-trails rlm setup           # print default config
fava-trails rlm setup --write   # write + jj commit

Development

uv run pytest -v          # run tests
uv run pytest --cov       # with coverage

Docs

Contributing

See CONTRIBUTING.md for setup instructions, how to run tests, and PR expectations.

See CHANGELOG.md for release history.
