context-engineering
Health: Passed
- License: MIT
- Description: Repository has a description
- Active repo: Last push 0 days ago
- Community trust: 17 GitHub stars
Code: Passed
- Code scan: Scanned 12 files during light audit, no dangerous patterns found
Permissions: Passed
- Permissions: No dangerous permissions requested
This project is an educational course and teaching application designed to help developers build production-ready semantic memory systems for AI assistants. It uses a hybrid RAG (Retrieval-Augmented Generation) pipeline to give AI models long-term memory capabilities via MCP.
Security Assessment
Overall risk: Low. The automated code scan of 12 files found no dangerous patterns, no hardcoded secrets, and the project requests no inherently dangerous permissions. The primary security note is regarding the `warnerco-restart` utility script, which executes shell commands (`taskkill /F` on Windows, `SIGKILL` via `lsof` on POSIX) to force-terminate processes on port 8000. While safe in its intended local development context, users should be aware of these aggressive process-termination mechanisms. The application runs locally and does not inherently expose external network endpoints or access sensitive user data.
Quality Assessment
Overall quality: Good. The project is actively maintained, with its most recent push occurring today. It has a standard, permissive MIT license, making it safe for both personal and commercial use. Community trust is currently low but positive, sitting at 17 GitHub stars, which is typical for niche educational repositories. The included documentation is comprehensive and provides clear, structured instructions for getting started.
Verdict
Safe to use, though users should review the restart script before running it locally to understand its process-termination behavior.
Stop building AI that forgets. Master MCP (Model Context Protocol) with production-ready semantic memory, hybrid RAG, and the WARNERCO Schematica teaching app. FastMCP + LangGraph + Vector/Graph stores. Your AI assistant's long-term memory starts here.
Context Engineering with MCP: Build AI Systems That Actually Remember
Welcome to the training hub for mastering Context Engineering with Model Context Protocol (MCP). This course teaches you to implement production-ready semantic memory systems for AI assistants using Python, FastAPI, FastMCP, and LangGraph.
Quick Start
Prerequisites
- Python 3.13 (pinned in .python-version); onnxruntime (a chromadb dependency) does not yet ship 3.14 wheels
- Node.js 20+ (for Lab 01 and MCP Inspector)
- uv package manager (recommended for Python)
- Claude Desktop or Claude Code
Option 1: Hello MCP Lab (Beginner Entry Point)
```shell
git clone https://github.com/timothywarner-org/context-engineering.git
cd context-engineering/labs/lab-01-hello-mcp/starter
npm install && npm start

# Test with MCP Inspector (in another terminal)
npx @modelcontextprotocol/inspector node src/index.js
```
Option 2: WARNERCO Schematica (Flagship Teaching App)
```shell
cd src/warnerco/backend
uv sync
uv run uvicorn app.main:app --reload   # HTTP server at http://localhost:8000
uv run warnerco-mcp                    # MCP stdio server for Claude Desktop
uv run warnerco-restart                # Force-kill port 8000 and restart uvicorn
```
The warnerco-restart command (from scripts/restart_server.py) terminates anything bound to port 8000 (Windows: netstat + taskkill /F /T; POSIX: lsof + SIGKILL) before restarting. Flags: --kill-only, --port N.
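The cross-platform kill strategy can be sketched in a few lines. This is a minimal illustration of the approach described above, not the actual contents of scripts/restart_server.py; the real script also parses netstat/lsof output to extract PIDs before killing.

```python
import sys


def kill_port_commands(port: int, platform: str = sys.platform) -> list[str]:
    """Return the command sequence used to free a port on each platform.

    Sketch only: `<pid>` stands for the process ID the real script
    extracts from the first command's output.
    """
    if platform.startswith("win"):
        # Windows: locate the listener with netstat, then force-kill the tree.
        return [
            f"netstat -ano | findstr :{port}",
            "taskkill /F /T /PID <pid>",
        ]
    # POSIX: lsof prints the PIDs bound to the port; SIGKILL terminates them.
    return [f"lsof -ti :{port}", "kill -9 <pid>"]
```

The `/T` flag on taskkill kills the whole process tree, which is why the security assessment flags the script as aggressive: anything listening on port 8000 is terminated, not just a stale uvicorn instance.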
Course Structure (4 x 50 Minutes)
| Segment | Topic | Focus |
|---|---|---|
| 1 | All About Context | Token economics, context loss types, why RAG isn't enough |
| 2 | All About MCP | FastMCP, FastAPI, tools, resources, prompts, elicitations |
| 3 | Semantic Memory Stores | JSON, ChromaDB, Azure AI Search, Graph Memory, Scratchpad |
| 4 | MCP in Production | Claude Desktop, Claude Code, VS Code, GitHub Copilot, LangGraph |
WARNERCO Schematica Architecture
The flagship teaching application demonstrates production MCP patterns with a 7-node hybrid RAG pipeline:
```
+---------------------------------------------------------------+
|                      FastAPI + FastMCP                        |
+---------------------------------------------------------------+
|            LangGraph Flow (7-node Hybrid RAG)                 |
|  parse_intent -> query_graph -> inject_scratchpad -> retrieve |
|            -> compress -> reason -> respond                   |
+---------------------------------------------------------------+
|                     Hybrid Memory Layer                       |
| +-------------------+ +-------------------+ +-------------+   |
| |   Vector Store    | |    Graph Store    | | Scratchpad  |   |
| | JSON -> Chroma -> | | SQLite + NetworkX | |  In-memory  |   |
| |  Azure AI Search  | | (Knowledge Graph) | |  (Session)  |   |
| +-------------------+ +-------------------+ +-------------+   |
+---------------------------------------------------------------+
```
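The 7-node flow is a linear chain of functions that each read and extend a shared state. A hypothetical sketch of that pattern, with three stub nodes standing in for the full pipeline (node bodies and field names here are invented for illustration; the real app wires its nodes through LangGraph):

```python
State = dict  # shared state passed from node to node


def parse_intent(state: State) -> State:
    # Stub intent classifier: questions become lookups.
    state["intent"] = "lookup" if "?" in state["query"] else "chat"
    return state


def retrieve(state: State) -> State:
    # Placeholder for vector-store retrieval.
    state["docs"] = [f"doc about {state['query']}"]
    return state


def respond(state: State) -> State:
    state["answer"] = f"[{state['intent']}] based on {len(state['docs'])} docs"
    return state


# The real pipeline has 7 nodes; three suffice to show the shape.
PIPELINE = [parse_intent, retrieve, respond]


def run(query: str) -> State:
    state: State = {"query": query}
    for node in PIPELINE:
        state = node(state)
    return state
```

The key design point is that every node receives the accumulated state, so later nodes (compress, reason) can see both the graph results and the retrieved documents injected earlier in the chain.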
Memory Store Comparison
| Feature | JSON | ChromaDB | Azure AI Search | Graph | Scratchpad |
|---|---|---|---|---|---|
| Semantic Search | No | Yes | Yes | No | No |
| Relationship Queries | No | No | No | Yes | No |
| Session Memory | No | No | No | No | Yes |
| Best For | Prototyping | Local dev | Production | Connections | Working memory |
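Because the stores trade off differently, a common interface lets the memory layer swap them. A minimal sketch, assuming a hypothetical two-method contract (the real backend's store API is documented in CLAUDE.md and may differ):

```python
from typing import Protocol


class MemoryStore(Protocol):
    """Hypothetical shared interface for the five store types."""

    def add(self, key: str, text: str) -> None: ...
    def search(self, query: str, k: int = 3) -> list[str]: ...


class JSONStore:
    """Prototyping tier: substring match only, no semantic search."""

    def __init__(self) -> None:
        self._items: dict[str, str] = {}

    def add(self, key: str, text: str) -> None:
        self._items[key] = text

    def search(self, query: str, k: int = 3) -> list[str]:
        # Naive keyword filter; ChromaDB/Azure would rank by embedding similarity.
        return [t for t in self._items.values() if query.lower() in t.lower()][:k]
```

Swapping JSONStore for a ChromaDB- or Azure-backed class then changes only the search quality, not the calling code.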
The knowledge graph is indexed at src/warnerco/backend/data/graph/knowledge.db with 117 entities and 221 relationships across 6 predicates (has_tag, compatible_with, belongs_to_model, has_status, has_category, contains).
Progressive Tool Loading
The server now registers 23 MCP tools, including two discovery tools that implement progressive tool loading per Anthropic's "code execution with MCP" guidance:
- warn_search_tools(query, detail, limit) - keyword discovery with detail levels name, summary, full
- warn_describe_tool(name) - full schema for a single tool by name
Measured token savings on this server versus shipping all full schemas up front: 95% (summary) and 98% (name-only); 9,064 tokens drop to 533 (summary) and 176 (name-only). Clients can list tools cheaply, then pull full schemas only for what they actually plan to call.
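The discovery pattern is easy to reproduce. A hedged sketch with an invented two-tool registry (the real server registers 23 tools and the schemas below are placeholders):

```python
# Invented registry: name -> {summary, schema}.
TOOLS = {
    "warn_list_parts": {
        "summary": "List robot parts with optional filters",
        "schema": {"type": "object", "properties": {"category": {"type": "string"}}},
    },
    "warn_search_memory": {
        "summary": "Semantic search over the vector store",
        "schema": {"type": "object", "properties": {"query": {"type": "string"}}},
    },
}


def warn_search_tools(query: str, detail: str = "summary", limit: int = 10):
    """Keyword discovery with three detail levels: name, summary, full."""
    hits = [n for n in TOOLS if query.lower() in n.lower()][:limit]
    if detail == "name":
        return hits  # cheapest view: names only
    if detail == "summary":
        return {n: TOOLS[n]["summary"] for n in hits}
    return {n: TOOLS[n] for n in hits}  # detail == "full"


def warn_describe_tool(name: str) -> dict:
    """Full schema for a single tool, fetched on demand."""
    return TOOLS[name]
```

The token savings come from the client calling the name or summary level first and requesting full schemas only for the one or two tools it actually intends to invoke.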
Repository Structure
```
context-engineering/
├── src/warnerco/backend/    # WARNERCO Schematica (FastAPI + FastMCP + LangGraph)
├── labs/lab-01-hello-mcp/   # Hands-on beginner lab
├── docs/                    # Student materials, tutorials, diagrams
├── instructor/              # Instructor-only materials
├── config/                  # Sample MCP client configurations
├── .vscode/                 # VS Code workspace configuration
├── .claude/                 # Claude Code agents and skills
└── CLAUDE.md                # Development instructions (SOURCE OF TRUTH)
```
For development details, see CLAUDE.md - the source of truth for:
- Complete MCP tool reference
- API endpoint documentation
- Environment variable configuration
- LangGraph pipeline details
- Graph and Scratchpad Memory features
MCP Client Configuration
Claude Desktop
Windows: %APPDATA%\Claude\claude_desktop_config.json
Mac: ~/Library/Application Support/Claude/claude_desktop_config.json
```json
{
  "mcpServers": {
    "warnerco": {
      "command": "uv",
      "args": ["run", "warnerco-mcp"],
      "cwd": "C:/github/context-engineering/src/warnerco/backend"
    }
  }
}
```
VS Code
See .vscode/mcp.json in the repository for local and Azure APIM configurations.
Testing with MCP Inspector
```shell
npx @modelcontextprotocol/inspector uv run warnerco-mcp
# Opens http://localhost:5173
```
Resources
- MCP Specification - Official protocol documentation
- FastMCP Documentation - Python MCP framework
- CLAUDE.md - Development instructions for this repository
Your Instructor
Tim Warner
Microsoft MVP - Azure AI and Cloud/Datacenter Management
Microsoft Certified Trainer (25+ years)
- Website: techtrainertim.com
- GitHub: @timothywarner
- LinkedIn: linkedin.com/in/timothywarner
License
MIT License - 2026 Timothy Warner
Now go build AI systems that actually remember!