context-engine
Health: Warning
- No license — Repository has no license file
- Description — Repository has a description
- Active repo — Last push today
- Community trust — 37 GitHub stars
Code: Failed
- fs module — File system access in .github/workflows/review_diff.yml
- eval() — Dynamic code execution via eval() in .review-invariants.yml
Permissions: Passed
- Permissions — No dangerous permissions requested
Security Assessment: Overall risk is Medium. Because the server's primary function is codebase indexing, it inherently reads and accesses your local file system. The automated scan flagged a critical failure for dynamic code execution via `eval()` inside a GitHub workflow file (`.review-invariants.yml`). While this might just be an artifact of automated CI testing, `eval()` is a dangerous function that can allow arbitrary code execution. The scan also noted file system access in a separate workflow. On a positive note, the server does not request overtly dangerous permissions, and there are no hardcoded secrets exposed in the scanned files.
Quality Assessment: The project is active and well-maintained, with repository updates pushed as recently as today. It has decent community traction for a niche tool, currently sitting at 37 GitHub stars. However, it lacks a license file. This is a significant drawback for enterprise or open-source use, as it means all rights are technically reserved by the author and you do not have legal permission to use, modify, or distribute the code.
Verdict: Use with caution due to the missing license and the presence of `eval()` in the repository.
Local-native Context Engine MCP server for codebase indexing, semantic retrieval, planning, and review workflows.
Context Engine MCP Server
A local-first, agent-agnostic Model Context Protocol (MCP) server for workspace indexing, retrieval, planning, and review workflows, with a setup path that works well for Codex and other OpenAI-powered agents.
New here? Start with the beginner quick start below.
If you want client-specific setup help, see docs/MCP_CLIENT_SETUP.md.
If you are on Windows, see docs/WINDOWS_DEPLOYMENT_GUIDE.md.
Historical docs live in docs/archive/INDEX.md if you need the old planning and migration notes.
OpenAI / Codex Showcase
If you want to see what this project demonstrates for OpenAI-style agent workflows, start here:
- Local workspace indexing and retrieval
- Review and planning workflows layered on top of the same MCP server
- Beginner-friendly install and client setup
- Windows support and copy-paste setup examples
- AI-agent-friendly instructions for self-setup
Why This Matters
- It shows how an OpenAI-powered agent can connect to a real workspace and start using tools right away.
- It combines retrieval, review, and planning in one MCP server instead of relying on one-off scripts.
- It gives both humans and AI agents a simple, repeatable setup path, which makes demos and onboarding easier.
Fastest demo path:
npm install
npm run build
codex mcp add context-engine -- node dist/index.js
Then in Codex, confirm the tools are visible and try:
use semantic_search to find authentication logic
Beginner Quick Start
If you just want to get Context Engine running locally, follow these steps:
- Install Node.js 18+.
- Clone this repository and open it in a terminal at the repo root.
- Install dependencies:
npm install
- Build the server:
npm run build
- Run the verification checks:
npm run verify
- Start the MCP server:
node dist/index.js
By default, Context Engine now resolves the workspace like this:
- an explicit `--workspace` flag wins
- otherwise it uses the current folder
- if you launched from a nested folder inside a git repo, it falls back to the nearest git root
- if no git root exists, it stays on the current folder and logs a warning
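The fallback order above can be sketched in TypeScript. This is an illustration of the described behavior only, not the server's actual implementation; `findGitRoot` and `resolveWorkspace` are hypothetical names:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

/** Walk upward from `dir` looking for a .git entry; return the repo root or null. */
function findGitRoot(dir: string): string | null {
  let current = path.resolve(dir);
  while (true) {
    if (fs.existsSync(path.join(current, ".git"))) return current;
    const parent = path.dirname(current);
    if (parent === current) return null; // reached the filesystem root
    current = parent;
  }
}

/** Resolution order: explicit flag > nearest git root > current folder. */
function resolveWorkspace(explicit: string | undefined, cwd: string): string {
  if (explicit) return path.resolve(explicit);
  const gitRoot = findGitRoot(cwd);
  if (gitRoot) return gitRoot;
  console.warn(`No git root found above ${cwd}; using it as the workspace.`);
  return path.resolve(cwd);
}
```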
On first run, if the index is missing or stale, startup can kick off background indexing automatically. The server still starts first, but the first query may be slower until indexing finishes.
Connect It To Your MCP Client
The server speaks MCP over stdio, so most clients can launch it with the same command.
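Under the hood, the stdio transport carries JSON-RPC 2.0 messages, typically one JSON object per line. A minimal framing sketch for illustration only; real clients should use an MCP SDK rather than hand-rolled framing:

```typescript
// One JSON-RPC 2.0 request per line, as the MCP stdio transport expects.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

/** Serialize a request as a single stdout line. */
function encodeMessage(msg: JsonRpcRequest): string {
  return JSON.stringify(msg) + "\n";
}

/** Parse one received line back into a message object. */
function decodeMessage(line: string): JsonRpcRequest {
  return JSON.parse(line.trim()) as JsonRpcRequest;
}

// Example: ask the server for its tool list.
const listTools = encodeMessage({ jsonrpc: "2.0", id: 1, method: "tools/list" });
```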
First-Time Setup vs Daily Use
Use this mental model:
- First-time setup: register the MCP server once in your client.
- Daily use: open any repo and let the server resolve the workspace automatically.
- Override only when needed: pass `--workspace <absolute-path>` if the client launches from the wrong folder or you want a different repo on purpose.
Codex CLI
codex mcp add context-engine -- node dist/index.js
Windows example
codex mcp add context-engine -- node "D:\GitProjects\context-engine\dist\index.js"
Claude Code, Claude Desktop, Cursor, Antigravity
See docs/MCP_CLIENT_SETUP.md for copy-paste config examples for each client.
Ready-to-use sample config files live in examples/mcp-clients/.
Optional skill packages for AI workflows live in examples/skills/.
If an AI agent is setting this up
Paste this into the agent if you want it to do the setup for you:
Set up Context Engine MCP for this workspace.
- Run `npm install` and `npm run build`.
- Register the MCP server once with `node dist/index.js`.
- Confirm the client launches the MCP server from the repo I am working in.
- If the client launches from the wrong folder, add `--workspace <absolute-path-to-workspace>` as an override.
- Confirm the server appears in the client and that `tool_manifest()` or an equivalent tool list works.
- Run one quick retrieval test, for example `semantic_search`, to confirm the connection is working.
- If startup says the workspace is unindexed or stale, let the background indexing finish or run `index_workspace` manually.

If the client is Codex CLI, use:
codex mcp add context-engine -- node dist/index.js
Startup Behavior
When the server starts without --workspace, it tries to be repo-aware:
- repo root launch: uses that repo
- nested repo folder launch: upgrades to the nearest git root
- non-git folder launch: stays on the current folder and warns clearly
If startup auto-index is enabled, missing or stale workspaces start background indexing automatically.
Operator override:
- disable startup auto-index with `CE_AUTO_INDEX_ON_STARTUP=false`
- force a specific workspace with `--workspace "D:\path\to\repo"`
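A minimal sketch of how these two overrides might be read at startup, assuming straightforward env and argv handling (the server's real flag parsing may differ):

```typescript
interface StartupOptions {
  autoIndexOnStartup: boolean;
  workspace?: string;
}

function readStartupOptions(
  argv: string[],
  env: Record<string, string | undefined>
): StartupOptions {
  // CE_AUTO_INDEX_ON_STARTUP=false disables background indexing at startup.
  const autoIndexOnStartup = env.CE_AUTO_INDEX_ON_STARTUP !== "false";
  // --workspace <path> forces a specific workspace regardless of cwd.
  const i = argv.indexOf("--workspace");
  const workspace = i >= 0 ? argv[i + 1] : undefined;
  return { autoIndexOnStartup, workspace };
}
```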
Architecture
This implementation follows a clean 5-layer architecture:
┌────────────────────────────┐
│ Coding Agents (Clients) │ Layer 4: Codex, Claude, Cursor, etc.
│ Codex | Claude | Cursor │
└────────────▲───────────────┘
│ MCP (tools)
┌────────────┴───────────────┐
│ MCP Interface Layer │ Layer 3: server.ts, tools/
│ (standardized tool API) │
└────────────▲───────────────┘
│ internal API
┌────────────┴───────────────┐
│ Context Service Layer │ Layer 2: serviceClient.ts
│ (query orchestration) │
└────────────▲───────────────┘
│ domain calls
┌────────────┴───────────────┐
│ Retrieval + Review Engine │ Layer 1: local-native runtime
│ (indexing, retrieval) │
└────────────▲───────────────┘
│ storage/state
┌────────────┴───────────────┐
│ Local State / Artifacts │ Layer 5: workspace state + evidence
│ (index, cache, receipts) │
└────────────────────────────┘
Layer Responsibilities
- Layer 1: local-native indexing, retrieval, review support, and provider orchestration
- Layer 2: context assembly, snippet formatting, deduplication, limits, and caching
- Layer 3: MCP tools, validation, and request/response contracts
- Layer 4: coding agents and MCP clients that consume the tools
- Layer 5: persisted index state, caches, rollout receipts, and generated artifacts
Features
MCP Tools
The server exposes tools across these areas:
- Core context and retrieval
- Memory
- Planning and execution
- Plan management
- Code review
- Reactive review
Use tool_manifest() in the MCP server to inspect the current tool inventory directly.
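The tool-inventory idea can be sketched without any SDK: handlers live in a registry, and a `tool_manifest`-style tool reports the registry's contents. Apart from the `tool_manifest` and `semantic_search` names taken from this README, everything here is illustrative, not the server's real API:

```typescript
type ToolHandler = (args: Record<string, unknown>) => unknown;

// Registry of available tools, keyed by tool name.
const tools = new Map<string, ToolHandler>();

function registerTool(name: string, handler: ToolHandler): void {
  tools.set(name, handler);
}

/** Mirrors the tool_manifest() idea: list every registered tool name. */
function toolManifest(): string[] {
  return [...tools.keys()].sort();
}

registerTool("tool_manifest", () => toolManifest());
registerTool("semantic_search", ({ query }) => `results for ${String(query)}`);
```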
Key Characteristics
- Local-first runtime for indexing and retrieval, with OpenAI-backed planning/review workflows layered on top
- Agent-agnostic MCP interface
- Local-native retrieval provider as the active runtime
- Thin `context-engine-mcp` launcher for convenience; it starts the same server and does not add features
- Persistent state and evidence artifacts for rollout-proof workflows
- Planning, review, and validation workflows built into the server
- Optional benchmarking, parity, and governance gates for safer changes
Quick Start
npm install
npm run build
npm run verify
node dist/index.js
Optional validation commands:
npm run ci:check:no-legacy-provider
npm run ci:check:legacy-capability-parity
npm run ci:check:legacy-capability-parity:strict
Documentation Quick Links
- Setup: docs/MCP_CLIENT_SETUP.md
- Windows Deployment: docs/WINDOWS_DEPLOYMENT_GUIDE.md
- Troubleshooting: docs/archive/TROUBLESHOOTING.md
- Testing: docs/archive/TESTING.md
- Architecture: ARCHITECTURE.md
- Memory Operations: docs/MEMORY_OPERATIONS_RUNBOOK.md
- All Docs: docs/archive/INDEX.md
Current Status
- Retrieval is local-native and index-backed
- Planning and review use the OpenAI session path
- Legacy-provider references that remain are historical docs, tests, or migration guardrails
- Current hardening focuses on fast paths, cancellation, and prompt efficiency rather than provider replacement