langgraph-starter-kit

Security Audit
Failed
Health: Warning
  • License — Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 5 GitHub stars
Code: Failed
  • execSync — Synchronous shell command execution in create-langgraph-app/create.ts
  • process.env — Environment variable access in create-langgraph-app/create.ts
Permissions: Passed
  • Permissions — No dangerous permissions requested
Purpose
This project is a starter kit and MCP integration that helps developers quickly scaffold production-ready multi-agent applications. It provides pre-configured agent patterns, multiple LLM provider integrations, and an HTTP server with streaming capabilities.

Security Assessment
The overall risk is rated as Medium. The tool does not request overtly dangerous permissions and no hardcoded secrets were found. However, it does access environment variables to handle API keys and provider credentials, which is standard but requires proper configuration. The primary security concern is a failed audit check regarding synchronous shell command execution (`execSync`) within the scaffolding script. If this CLI is run in a context with untrusted inputs, it could potentially be exploited for arbitrary command execution. Additionally, as an LLM application framework, it inherently makes external network requests to interact with AI providers and web search tools.
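To make the `execSync` concern concrete, here is a generic Node.js sketch — not code from this repository — contrasting the risky pattern with `execFileSync`, which passes arguments without shell parsing:

```typescript
// Generic Node.js sketch (not the kit's actual scaffolding code): the
// injection risk behind the execSync finding, and a safer alternative.
import { execFileSync } from "node:child_process";

// UNSAFE pattern: execSync(`git clone ${repoUrl} ${dest}`) — a crafted
// input like "x; rm -rf ~" would be parsed and executed by the shell.

// Safer: execFileSync passes each argument directly to the binary,
// so shell metacharacters in user input stay inert.
function safeClone(repoUrl: string, dest: string): string {
  // "echo" stands in for "git" so this sketch runs without side effects
  return execFileSync("echo", ["clone", repoUrl, dest], { encoding: "utf8" });
}

// The semicolon survives as a literal argument, not a second command:
console.log(safeClone("https://example.com/x.git; rm -rf ~", "out").trim());
```

Reviewing whether the CLI's actual `execSync` calls ever interpolate user input is the key check before running it in automation.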

Quality Assessment
The project appears to be actively maintained, with its last push occurring very recently. It benefits from a standard Apache-2.0 open-source license and includes continuous integration workflows. The main drawback is extremely low community visibility: with only 5 GitHub stars, the codebase has not undergone widespread peer review or battle-testing. Community trust is therefore currently low, and developers should expect to rely on their own code review and testing rather than community support.

Verdict
Use with caution — the low community adoption and presence of synchronous shell execution mean you should review the CLI script carefully before running it.
Summary

Production-ready multi-agent starter kit for LangGraph — 7 patterns, 5 LLM providers, CLI scaffolder, MCP integration, and HTTP server with SSE streaming.

README.md

LangGraph Starter Kit

The fastest way to build production-ready multi-agent apps with LangGraph.
7 patterns. 5 providers. One command.


Quick Start · Patterns · Providers · Examples · API · Contributing


Why This Exists

Building multi-agent systems with LangGraph means writing the same boilerplate over and over — setting up supervisors, wiring handoff tools, configuring providers, adding persistence. This starter kit gives you all of that out of the box so you can focus on your agent logic, not infrastructure.

npx create-langgraph-app

What you get:

  • Pick your LLM provider (OpenAI, Anthropic, Google, Groq, or local Ollama)
  • Choose which agent patterns you need
  • Get a ready-to-run project with tests, types, and a Fastify server

Or clone the full kit with all 7 patterns included.

Architecture

graph TD
    A[LangGraph Starter Kit] --> B["CLI Demo · npm run dev"]
    A --> C["HTTP Server · npm run dev:http"]
    A --> D[LangGraph Studio]

    B --> P
    C --> P
    D --> P

    subgraph P["7 Agent Patterns"]
        P1[Supervisor]
        P2[Swarm]
        P3[HITL]
        P4[Structured Output]
        P5[Research Agent]
        P6[RAG]
        P7[Customer Support]
    end

    subgraph F["Agent Factory"]
        F1["makeAgent()"]
        F2["makeSupervisor()"]
        F3["makeSwarm()"]
    end

    subgraph T["Tools"]
        T1[Local]
        T2[Web Search]
        T3[RAG]
        T4[Support]
        T5[MCP]
    end

    P --> L
    F --> L
    T --> L

    subgraph L["5 LLM Providers"]
        L1[OpenAI]
        L2[Anthropic]
        L3[Google]
        L4[Groq]
        L5["Ollama · local"]
    end

Features

| Feature | Description |
| --- | --- |
| 7 Agent Patterns | Swarm, Supervisor, HITL, Structured Output, Research, RAG, Customer Support |
| CLI Scaffolder | npx create-langgraph-app — interactive project generator |
| 5 LLM Providers | OpenAI, Anthropic, Google, Groq, Ollama — switch with one env var |
| MCP Integration | Connect external tools via Model Context Protocol |
| HTTP + SSE | Fastify server with invoke, streaming, resume, and thread history |
| LangGraph Studio | langgraph.json included for visual graph debugging |
| LangSmith Tracing | Full observability with one env var |
| Memory + Postgres | In-memory for dev, PostgreSQL-ready for production |
| Docker + CI | Docker Compose with Postgres, GitHub Actions CI |
| 34+ Tests | Tools, config, agents — all tested with vitest |

Quick Start

Option A: Scaffold a new project (recommended)

npx create-langgraph-app

Interactive CLI — pick your provider, choose your patterns, get a project:

  ╔═══════════════════════════════════════╗
  ║     create-langgraph-app              ║
  ╚═══════════════════════════════════════╝

  Project name (my-langgraph-app): my-agents
  
  LLM provider?
    1. OpenAI (gpt-4o-mini)
    2. Anthropic (Claude Sonnet)
    3. Google (Gemini 2.0 Flash)
    4. Groq (Llama 3.3 70B)
    5. Ollama (local, no API key)

  Which patterns?
    1. Supervisor
    2. Swarm
    3. Human-in-the-Loop
    4. Structured Output
    5. RAG

  Done! cd my-agents && npm run dev

Option B: Clone the full starter kit

git clone https://github.com/ac12644/langgraph-starter-kit.git
cd langgraph-starter-kit
npm install
cp .env.example .env    # Add your API key
npm run dev             # Run all 7 patterns
npm run dev:http        # Start HTTP server on :3000

LLM Providers

Switch providers with one env var. Each has a sensible default model:

| Provider | LLM_PROVIDER | Default Model | API Key |
| --- | --- | --- | --- |
| OpenAI | openai | gpt-4o-mini | OPENAI_API_KEY |
| Anthropic | anthropic | claude-sonnet-4-20250514 | ANTHROPIC_API_KEY |
| Google | google | gemini-2.0-flash | GOOGLE_API_KEY |
| Groq | groq | llama-3.3-70b-versatile | GROQ_API_KEY |
| Ollama | ollama | llama3.2 | None (runs locally) |

# .env — just two lines to switch
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
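The selection logic behind that env var can be pictured as a simple lookup. This is a hypothetical sketch mirroring the table above — the kit's actual src/config/llm.ts may differ:

```typescript
// Hypothetical sketch of provider selection (the kit's real llm.ts may
// differ): map LLM_PROVIDER to its default model and required key var.
type Provider = "openai" | "anthropic" | "google" | "groq" | "ollama";

const DEFAULTS: Record<Provider, { model: string; apiKeyVar: string | null }> = {
  openai: { model: "gpt-4o-mini", apiKeyVar: "OPENAI_API_KEY" },
  anthropic: { model: "claude-sonnet-4-20250514", apiKeyVar: "ANTHROPIC_API_KEY" },
  google: { model: "gemini-2.0-flash", apiKeyVar: "GOOGLE_API_KEY" },
  groq: { model: "llama-3.3-70b-versatile", apiKeyVar: "GROQ_API_KEY" },
  ollama: { model: "llama3.2", apiKeyVar: null }, // local, no key needed
};

function resolveProvider(name: string): { model: string; apiKeyVar: string | null } {
  if (!(name in DEFAULTS)) throw new Error(`Unknown LLM_PROVIDER: ${name}`);
  return DEFAULTS[name as Provider];
}

console.log(resolveProvider("anthropic").model);
```

Failing fast on an unknown provider name keeps misconfiguration errors at startup rather than mid-conversation.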

Agent Patterns

1. Supervisor

A central coordinator routes tasks to specialized workers. Best for: structured workflows with clear task delegation.

curl -X POST http://localhost:3000/supervisor/invoke \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "sum 10 and 15, then write a summary"}]}'

2. Swarm

Agents hand off to each other peer-to-peer using transfer tools. Best for: open-ended conversations where the right agent depends on context.

curl -X POST http://localhost:3000/swarm/invoke \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "talk to bob then add 5 and 7"}]}'

3. Human-in-the-Loop

Pauses the graph for human approval before dangerous actions. Best for: high-stakes operations — deletions, payments, emails.

# Trigger an action that needs approval
curl -X POST http://localhost:3000/interrupt/invoke \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "delete record rec_2"}], "thread_id": "hitl-1"}'

# Approve it
curl -X POST http://localhost:3000/interrupt/resume \
  -H "Content-Type: application/json" \
  -d '{"thread_id": "hitl-1", "decision": "yes"}'

4. Structured Output

Returns typed JSON validated by Zod. Best for: extracting structured data — summaries, classifications, entities.

curl -X POST http://localhost:3000/analyst/invoke \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Analyze: Revenue grew 25% but churn increased 8%"}]}'
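The kit validates structured output with Zod; this dependency-free type guard shows the same idea, with illustrative field names (summary, sentiment, metrics are assumptions, not the analyst's actual schema):

```typescript
// Hypothetical shape for the analyst's structured output. The kit uses
// Zod; this plain type guard illustrates the same validation idea:
// reject any LLM response that doesn't match the expected structure.
interface Analysis {
  summary: string;
  sentiment: "positive" | "negative" | "mixed";
  metrics: string[];
}

function isAnalysis(value: unknown): value is Analysis {
  const v = value as Analysis;
  return (
    typeof v === "object" && v !== null &&
    typeof v.summary === "string" &&
    ["positive", "negative", "mixed"].includes(v.sentiment) &&
    Array.isArray(v.metrics) && v.metrics.every((m) => typeof m === "string")
  );
}

console.log(
  isAnalysis({
    summary: "Revenue up, churn up",
    sentiment: "mixed",
    metrics: ["revenue +25%", "churn +8%"],
  })
);
```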

5. Research Agent

Web search + URL scraping coordinated by a supervisor. Best for: gathering and synthesizing information from the web.

curl -X POST http://localhost:3000/researcher/invoke \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Research multi-agent AI systems"}]}'

6. RAG (Retrieval-Augmented Generation)

In-memory vector store with semantic search. Best for: answering questions about your own documents/knowledge base.

curl -X POST http://localhost:3000/rag/invoke \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is the supervisor pattern?"}]}'

7. Customer Support Bot

Multi-agent support system with a router that delegates to billing, tech support, and returns specialists. Includes escalation to human operators. Best for: customer-facing products with different support domains.

curl -X POST http://localhost:3000/support/invoke \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "I am customer C-1002. I was charged $29.99 but my plan is free. Can you help?"}]}'

Streaming

Every app supports SSE for real-time token streaming:

curl -N http://localhost:3000/supervisor/stream \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "what is 2+2?"}]}'
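On the client side, the stream arrives in standard SSE framing ("data: ..." lines separated by blank lines). A minimal parsing sketch — the framing is the SSE standard, but the payload shape inside each event is defined by the server:

```typescript
// Minimal client-side sketch: extract SSE payload strings from raw
// chunks. Only the "data:" framing is assumed (it is standard SSE);
// what each payload contains is up to the server.
function parseSse(chunk: string): string[] {
  return chunk
    .split("\n\n") // events are separated by a blank line
    .flatMap((event) =>
      event
        .split("\n")
        .filter((line) => line.startsWith("data: "))
        .map((line) => line.slice("data: ".length))
    );
}

console.log(parseSse("data: hello\n\ndata: world\n\n")); // two payloads
```

In a real client you would feed chunks from a fetch ReadableStream into this parser and JSON.parse each payload.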

MCP Integration

Extend your agents with external tools via Model Context Protocol:

cp mcp-servers.example.json mcp-servers.json
# Edit mcp-servers.json with your MCP server configs
# Set MCP_SERVERS_PATH=./mcp-servers.json in .env

Supports both stdio (local) and http (remote) transports. Tools are auto-injected into swarm and supervisor apps.
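The exact schema is defined by the kit's mcp-servers.example.json; purely as an illustration, MCP server configs in this ecosystem typically name each server and give it either a stdio command or an HTTP URL (the server names and package below are placeholders, not the kit's defaults):

```json
{
  "filesystem": {
    "transport": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
  },
  "remote-tools": {
    "transport": "http",
    "url": "https://example.com/mcp"
  }
}
```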

Observability

LangGraph Studio

langgraph dev    # Visual graph debugging

LangSmith Tracing

LANGCHAIN_TRACING_V2=true
LANGSMITH_API_KEY=ls_...
LANGSMITH_PROJECT=langgraph-starter-kit

API Reference

| Route | Method | Description |
| --- | --- | --- |
| /:app/invoke | POST | Invoke agent, return final result |
| /:app/stream | POST | SSE token streaming |
| /:app/resume | POST | Resume paused graph (HITL) |
| /:app/threads/:id | GET | Get thread state |
| /:app/threads/:id/history | GET | Full state history |
| /health | GET | Health check |

Apps: swarm supervisor interrupt analyst researcher rag support

Examples

Real-world agent apps with full documentation:

| Example | Description | Patterns |
| --- | --- | --- |
| Customer Support Bot | Billing, tech support, returns routing with human escalation | Supervisor, HITL |
| Research Agent | Web search + report writing pipeline | Supervisor |
| RAG Agent | Document indexing + semantic retrieval | Supervisor, RAG |

Each example has its own README with architecture diagrams, tool reference, usage examples, and customization guide.

Project Structure

src/
├── config/
│   ├── env.ts              # Environment + provider validation
│   ├── llm.ts              # Multi-provider LLM factory
│   ├── embeddings.ts       # Multi-provider embeddings factory
│   └── checkpointer.ts     # Memory (dev) / Postgres (prod)
├── tools/
│   ├── local.ts            # Built-in tools (add, multiply, echo)
│   ├── web.ts              # Web search + URL scraping
│   ├── rag.ts              # Vector store + retrieval
│   ├── support.ts          # Customer support tools
│   └── mcp.ts              # MCP external tool loader
├── agents/
│   ├── factory.ts          # makeAgent() — agent builder
│   ├── supervisor.ts       # makeSupervisor() wrapper
│   ├── swarm.ts            # makeSwarm() wrapper
│   └── handoff.ts          # createHandoffTool() — transfers
├── apps/
│   ├── supervisor.ts       # Supervisor pattern
│   ├── swarm.ts            # Swarm pattern
│   ├── interrupt.ts        # Human-in-the-loop
│   ├── analyst.ts          # Structured output
│   ├── researcher.ts       # Research agent
│   ├── rag.ts              # RAG agent
│   └── support.ts          # Customer support bot
├── server/index.ts         # Fastify HTTP server
└── index.ts                # CLI demo
examples/
├── customer-support/       # Full customer support bot docs
├── research-agent/         # Research agent docs
└── rag-agent/              # RAG agent docs

Deploy

# Docker Compose (includes Postgres)
docker compose up

# Standalone Docker
docker build -t langgraph-starter .
docker run -p 3000:3000 --env-file .env langgraph-starter

| Platform | How |
| --- | --- |
| Railway | Deploy with this repo URL |
| Render | Connect repo — uses render.yaml |
| Docker | docker compose up anywhere |

Adding Your Own Agent

Create a file, wire it up, done:

// src/apps/my-agent.ts
import { llm } from "../config/llm";
import { makeAgent } from "../agents/factory";
import { makeSupervisor } from "../agents/supervisor";

export function createMyApp() {
  const agent = makeAgent({
    name: "my_agent",
    llm,
    tools: [/* your tools */],
    system: "You are a helpful assistant.",
  });

  return makeSupervisor({
    agents: [agent],
    llm,
    outputMode: "last_message",
    supervisorName: "my_supervisor",
  });
}

Register in src/server/index.ts and you're live.

Contributing

Contributions are welcome! Whether it's a new agent pattern, bug fix, documentation improvement, or just a typo — every bit helps.

See CONTRIBUTING.md for guidelines.

First time contributing? Look for issues labeled good first issue.

Community

  • Questions? Open a Discussion
  • Bug? File an Issue
  • Want to contribute? See CONTRIBUTING.md
  • Like it? Give it a star — it helps others find the project

License

Apache License 2.0 — same license as LangChain. Use it freely in personal and commercial projects.
