sago (agent)

Security Audit: Pass

Health: Pass
  • License — Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 11 GitHub stars
Code: Pass
  • Code scan — Scanned 12 files during light audit; no dangerous patterns found
Permissions: Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool acts as a project planner and control plane for AI coding agents. It takes a markdown description of a project and automatically generates a structured plan, requirements, and phased tasks for agents like Cursor or Claude Code to execute.

Security Assessment
Overall risk: Low. The automated code scan reviewed 12 files and found no dangerous patterns, hardcoded secrets, or requests for risky permissions. Network requests are limited to standard LLM API calls using the LiteLLM library for project generation and review, which requires an API key provided by the user. Because this is strictly a planning tool, it does not actually execute project tasks, run shell commands, or write code itself. This narrow scope significantly reduces its attack surface.

Quality Assessment
Overall quality: Good. The project is actively maintained, with repository activity as recent as today. It uses a standard, permissive open-source license (Apache-2.0). The repository features clear documentation and a well-defined workflow. While community trust is currently limited by a small footprint of 11 GitHub stars, the project passes all baseline health checks and the codebase appears clean.

Verdict
Safe to use.
Summary

You describe what you want in markdown; sago generates the plan.

README.md

Sago

AI project planning and orchestration: the project planner for AI coding agents.

You describe the project in markdown. Sago generates and maintains the plan. Your coding agent (Claude Code, Codex, Cursor, Aider, etc.) builds it.

License: Apache 2.0 · Skylos: Dead Code Free


What sago does

Sago is a planning and control-plane tool, not a coding agent. It turns your project idea into a structured, verified plan, tracks progress across phases, and then gets out of the way so a real coding agent can do the building.

You → sago init → sago plan → coding agent builds Phase 1 → sago replan → coding agent builds Phase 2 → ...

Why? AI coding agents (Claude Code, Codex, Cursor, etc.) are excellent at writing code but bad at planning entire projects from scratch. They lose track of requirements, skip steps, and produce inconsistent architectures. Sago owns the spec, plan, and phase gates so the coding agent can focus on writing code.

Sago does not execute project tasks itself. The intended workflow is:

  • Sago defines the work (PROJECT.md, REQUIREMENTS.md, PLAN.md)
  • Your coding agent executes the work
  • Sago records state, reviews completed phases, and updates the plan


Quick start

1. Install

pip install -e .

Requires Python 3.11+.

2. Set up your LLM provider

Create a .env file (or export the variables):

LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
LLM_API_KEY=sk-your-key-here

Any LiteLLM-supported provider works — OpenAI, Anthropic, Azure, Gemini, etc. The LLM is used for planning and review, not for task execution.
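
For reference, the mapping from these variables onto LiteLLM completion arguments can be sketched as below. This is a hypothetical helper for illustration only, not sago's actual wiring; the variable names are the ones documented above, and the provider-prefix rule is an assumption about how LiteLLM model strings are routed.

```python
def build_llm_kwargs(env):
    """Map the .env settings above onto LiteLLM-style completion kwargs.

    Hypothetical helper for illustration -- not sago's actual code.
    """
    provider = env.get("LLM_PROVIDER", "openai")
    model = env.get("LLM_MODEL", "gpt-4o")
    # LiteLLM routes on the model string; non-OpenAI providers are
    # typically prefixed, e.g. "gemini/gemini-2.0-flash".
    if "/" not in model and provider != "openai":
        model = f"{provider}/{model}"
    kwargs = {
        "model": model,
        "temperature": float(env.get("LLM_TEMPERATURE", "0.1")),
        "max_tokens": int(env.get("LLM_MAX_TOKENS", "4096")),
    }
    if env.get("LLM_API_KEY"):
        kwargs["api_key"] = env["LLM_API_KEY"]
    return kwargs

print(build_llm_kwargs({"LLM_PROVIDER": "gemini", "LLM_MODEL": "gemini-2.0-flash"})["model"])
# gemini/gemini-2.0-flash
```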

For ChatGPT subscription access via LiteLLM, use the ChatGPT route model (OAuth device flow, no API key required):

LLM_PROVIDER=chatgpt
LLM_MODEL=chatgpt/gpt-5.3-codex

3. Create a project

sago init

Sago prompts for a project name and description. It generates the project scaffold:

my-project/
├── PROJECT.md          ← Vision, tech stack, architecture
├── REQUIREMENTS.md     ← What the project must do
├── PLAN.md             ← Atomic tasks with verify commands (after sago plan)
├── STATE.md            ← Progress log (updated as tasks complete)
├── CLAUDE.md           ← Instructions for the coding agent
├── IMPORTANT.md        ← Rules the coding agent must follow
└── .planning/          ← Runtime artifacts (cache, traces)

If you provide a description during init, the AI generates PROJECT.md and REQUIREMENTS.md for you. Otherwise, fill them in yourself.

4. Generate the plan

sago plan

Sago reads your PROJECT.md and REQUIREMENTS.md, detects your environment (Python version, OS, platform), and generates a PLAN.md with:

  • Atomic tasks grouped into phases
  • Task-level dependency ordering via depends_on
  • Verification commands for each task
  • A list of third-party packages needed
  • Semantic validation (duplicate IDs, dependency cycles, missing fields, etc.)

Sago validates the plan automatically — if it finds structural errors (cycles, invalid dependencies, missing task IDs), it retries once with error feedback. You're shown validation results and asked to accept or reject before the plan is written.
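
The structural checks can be sketched with Python's stdlib graphlib. This is an illustration of the kind of validation described (unknown dependencies, cycles), not sago's implementation:

```python
from graphlib import TopologicalSorter, CycleError

def validate_plan(tasks):
    """tasks maps task id -> list of depends_on ids. Returns error strings."""
    errors = []
    ids = set(tasks)
    # Report dependencies that point at task IDs missing from the plan.
    for tid, deps in tasks.items():
        for dep in deps:
            if dep not in ids:
                errors.append(f"task {tid}: unknown dependency {dep}")
    # A topological sort fails exactly when the depends_on graph has a cycle.
    try:
        list(TopologicalSorter(tasks).static_order())
    except CycleError as e:
        errors.append(f"dependency cycle: {e.args[1]}")
    return errors

print(validate_plan({"1.1": [], "1.2": ["1.1"]}))       # []
print(validate_plan({"1.1": ["1.2"], "1.2": ["1.1"]}))  # reports the 1.1 <-> 1.2 cycle
```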

Use sago plan --yes for fully non-interactive plan generation. It skips both the final accept/reject prompt and the placeholder-content warning that normally protects untouched template files.

5. Hand off to your coding agent

Point your coding agent at the project and tell it to follow the plan:

Claude Code:

cd my-project
claude
# Claude Code reads CLAUDE.md automatically and follows the plan

Cursor / Other agents:
Open the project directory. The agent should read PLAN.md and execute tasks in order, running each <verify> command and then sago checkpoint to record progress.

6. Watch your agent work

In a separate terminal, launch mission control:

sago watch

This opens a live dashboard in your browser that shows task completion, file activity, and phase progress — updated every second as your coding agent works through the plan.

7. Review between phases

After your coding agent finishes a phase, run the phase gate:

sago replan

This reviews completed phases only: it shows findings (warnings, suggestions), saves the review to STATE.md, and surfaces actionable recommendations (e.g. "task failed 2+ times — consider replanning"). You can optionally adjust the plan before the next phase, or just press Enter to skip replanning if the review looks good.

8. Track progress

sago status              # quick summary + recommendations
sago status -d           # detailed per-task breakdown
sago lint-plan           # validate plan without running anything
sago doctor              # check project + environment health

How it works

┌─────────────────────────────────────────────────────┐
│  1. SPEC                                            │
│     You write PROJECT.md + REQUIREMENTS.md          │
│     (or describe your idea and sago generates them) │
└──────────────────────┬──────────────────────────────┘
                       ▼
┌─────────────────────────────────────────────────────┐
│  2. PLAN (sago)                                     │
│     Sago calls an LLM to generate PLAN.md:          │
│     - Atomic tasks with verification commands       │
│     - Task-level dependencies (depends_on DAG)      │
│     - Environment-aware (Python version, OS)        │
│     - Lists required third-party packages           │
└──────────────────────┬──────────────────────────────┘
                       ▼
┌─────────────────────────────────────────────────────┐
│  3. BUILD (your coding agent)                       │
│     Claude Code / Cursor / Aider executes tasks:    │
│     - Runs sago next to get the next task           │
│     - Follows <action> instructions                 │
│     - Runs <verify> commands                        │
│     - Runs sago checkpoint to record progress       │
└──────────────────────┬──────────────────────────────┘
                       ▼
┌─────────────────────────────────────────────────────┐
│  4. REVIEW (sago replan)                            │
│     Between phases, reviews completed work:         │
│     - Runs ReviewerAgent on finished phases         │
│     - Shows warnings, suggestions, issues           │
│     - Saves review to STATE.md                      │
│     - Optionally updates the plan with feedback     │
└──────────────────────┬──────────────────────────────┘
                       ▼
              (repeat 3→4 for each phase)
                       ▼
┌─────────────────────────────────────────────────────┐
│  5. TRACK (sago)                                    │
│     sago status shows progress                      │
│     Dashboard shows real-time updates               │
└─────────────────────────────────────────────────────┘

Sago is the project manager. Your coding agent is the developer. The markdown files are the contract between them.


Using with Claude Code

Sago generates a CLAUDE.md file during sago init that Claude Code reads automatically. It tells Claude Code how to follow the plan, execute tasks in order, and record progress via sago checkpoint.

sago init my-project --prompt "A weather dashboard with FastAPI and PostgreSQL"
cd my-project
sago plan
claude

Claude Code picks up CLAUDE.md on startup and understands the task format. The agent runs sago next to get its assignment, executes it, then calls sago checkpoint to record progress. When all tasks in a phase are done, sago automatically detects it and prompts the agent to run sago replan.


Using with Cursor

sago init my-project --prompt "A weather dashboard with FastAPI and PostgreSQL"
cd my-project
sago plan

Copy the sago workflow instructions into Cursor's rules file so the agent knows how to work:

cp CLAUDE.md .cursorrules

Then open the project in Cursor and use Agent mode. Tell it:

"Run sago next to get the next task. Execute it, run the verify command, then run sago checkpoint to record progress. Repeat."

Cursor's agent will follow the plan the same way Claude Code does.


Using with Aider

sago init my-project --prompt "A weather dashboard with FastAPI and PostgreSQL"
cd my-project
sago plan

Feed the plan and project context to Aider:

aider --read PLAN.md --read PROJECT.md --read REQUIREMENTS.md

Then tell it which task to work on:

"Execute task 1.1 from PLAN.md. Create the files listed, follow the action instructions, then run the verify command."

Work through tasks one at a time since Aider works best with focused, single-task instructions.


Using with any other agent

Sago's output is just markdown files. Any coding agent that can read files and run commands works. The agent needs to:

  1. Read PROJECT.md — the project vision, tech stack, and architecture
  2. Read REQUIREMENTS.md — what the project must do
  3. If PLAN.md has a <dependencies> block, install those packages first
  4. Run sago next to get the next task — it shows the task details, dependencies, and context
  5. Run each task's <verify> command — it must exit 0 before moving on
  6. Run sago checkpoint <task_id> after each task to record progress in STATE.md
  7. Repeat from step 4 until all tasks are complete

The CLAUDE.md file generated by sago init contains these instructions in a format most agents understand. Rename or copy it to whatever your agent expects (.cursorrules, .github/copilot-instructions.md, etc.).
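
The core of steps 4 through 7 is dependency-aware task selection. Assuming sago next returns the first incomplete task whose depends_on tasks are all done (which matches the plan semantics described above), the selection logic can be sketched as follows; this is an illustration, not sago's source:

```python
def next_task(tasks, done):
    """Return the first task whose dependencies are all complete.

    tasks: ordered list of (task_id, depends_on_ids); done: set of finished ids.
    Illustrative sketch of what `sago next` provides -- not sago's code.
    """
    for tid, deps in tasks:
        if tid not in done and all(d in done for d in deps):
            return tid
    return None  # every task is complete

plan = [("1.1", []), ("1.2", ["1.1"]), ("2.1", ["1.1", "1.2"])]
print(next_task(plan, set()))                   # 1.1
print(next_task(plan, {"1.1"}))                 # 1.2
print(next_task(plan, {"1.1", "1.2", "2.1"}))   # None
```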


Mission control

While your coding agent builds the project, run mission control in a separate terminal:

sago watch                   # launch dashboard (auto-opens browser)
sago watch --port 8080       # use a specific port
sago watch --path ./my-app   # point to a different project

The dashboard shows:

  • Overall progress — progress bar with task count and percentage
  • Phase tree — every phase and task with live status icons (done, failed, pending)
  • File activity — new and modified files detected in the project directory
  • Dependencies — packages listed in PLAN.md
  • Per-phase progress bars — at a glance, which phases are done

It polls STATE.md every second; as sago checkpoint records task results, the dashboard updates automatically. There are no extra dependencies (stdlib HTTP server + os.stat). Mission control also reads trace data from the target project's own .planning/trace.jsonl, so sago watch --path ./other-project does not leak runtime artifacts into your current shell directory.
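
The polling mechanic can be sketched in a few lines of stdlib Python. This is assumed behavior based on the description above (os.stat, one-second interval), not the dashboard's actual code:

```python
import os
import pathlib
import tempfile
import time

def poll_for_change(path, last_mtime, interval=1.0, timeout=5.0):
    """Block until path's modification time changes, then return the new
    mtime. Mirrors the stdlib-only os.stat polling described above;
    illustrative sketch, not the dashboard's source."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        mtime = os.stat(path).st_mtime_ns
        if mtime != last_mtime:
            return mtime
        time.sleep(interval)
    return last_mtime  # timed out with no change

# Simulate a checkpoint updating STATE.md in a temp directory.
state = pathlib.Path(tempfile.mkdtemp()) / "STATE.md"
state.write_text("## Progress\n")
before = os.stat(state).st_mtime_ns
time.sleep(0.05)
state.write_text("## Progress\n- [x] 1.1 done\n")
print(poll_for_change(state, before, interval=0.01) != before)  # True
```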

Trace Events

Mission control already includes a Trace tab. To capture plan/replan events, enable tracing in your environment:

ENABLE_TRACING=true

Then run sago plan or sago replan, followed by:

sago watch

The dashboard reads .planning/trace.jsonl and shows the live event feed when trace data is present. Trace spans keep stable span_id values across paired *_start / *_end events, which makes the feed easy to consume from external tooling.
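
Consuming that feed from external tooling might look like the sketch below. The event field names used here (event, span_id, ts) are assumptions based on this description, not a documented schema:

```python
import json

def pair_spans(lines):
    """Pair *_start / *_end trace events that share a span_id and report
    (span_id, duration) tuples. The JSONL event shape is an assumption
    based on the README's description of .planning/trace.jsonl."""
    starts, spans = {}, []
    for line in lines:
        ev = json.loads(line)
        if ev["event"].endswith("_start"):
            starts[ev["span_id"]] = ev
        elif ev["event"].endswith("_end"):
            start = starts.pop(ev["span_id"], None)
            if start is not None:
                spans.append((ev["span_id"], ev["ts"] - start["ts"]))
    return spans

feed = [
    '{"event": "plan_start", "span_id": "a1", "ts": 10.0}',
    '{"event": "plan_end", "span_id": "a1", "ts": 12.5}',
]
print(pair_spans(feed))  # [('a1', 2.5)]
```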


Commands

sago init                            # interactive: prompts for name + description
sago init [name]                     # quick scaffold with templates
sago init [name] --prompt "desc"     # generate spec files from a prompt via LLM
sago init -y                         # non-interactive, all defaults
sago plan                            # generate PLAN.md from requirements
sago plan --yes                      # auto-accept plan without confirmation
sago checkpoint 1.1 --notes "done"   # record task completion in STATE.md
sago checkpoint 1.2 -s failed -n "import error"  # record failure
sago checkpoint 2.1 -d "Chose JWT"   # record with key decision
sago next                            # show next actionable task with full details
sago lint-plan                       # validate PLAN.md for structural/semantic issues
sago lint-plan --strict              # treat warnings as errors
sago lint-plan --json                # machine-readable JSON output
sago doctor                          # run project and environment diagnostics
sago judge                           # configure the judge/reviewer model
sago replan                          # phase gate: review completed work, optionally update plan
sago watch                           # launch mission control dashboard
sago watch --port 8080               # use a specific port
sago status                          # show project progress + recommendations
sago status -d                       # detailed per-task breakdown

Flags for sago plan

  • --force / -f — Regenerate PLAN.md if it already exists
  • --yes / -y — Fully non-interactive plan generation: skips the placeholder warning and the final confirmation prompt

Configuration

Create a .env file in your project directory:

LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
LLM_API_KEY=your-key-here
LLM_TEMPERATURE=0.1
LLM_MAX_TOKENS=4096
LOG_LEVEL=INFO

Any LiteLLM-supported provider works. Set LLM_MODEL to the provider's model identifier (e.g., claude-sonnet-4-5-20250929, gpt-4o, gemini/gemini-2.0-flash).
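
A minimal sketch of parsing this file format follows. Sago presumably uses a full-featured loader; this only shows the KEY=VALUE mechanics of the .env file above:

```python
import pathlib
import tempfile

def load_dotenv(path):
    """Minimal .env parser: KEY=VALUE lines, '#' comments, no quoting.
    Illustrative sketch only -- not sago's actual loader."""
    env = {}
    for raw in pathlib.Path(path).read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

p = pathlib.Path(tempfile.mkdtemp()) / ".env"
p.write_text("LLM_PROVIDER=openai\nLLM_MODEL=gpt-4o\n# comment\nLLM_TEMPERATURE=0.1\n")
print(load_dotenv(p)["LLM_MODEL"])  # gpt-4o
```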

ChatGPT subscription mode is also supported through LiteLLM:

LLM_PROVIDER=chatgpt
LLM_MODEL=chatgpt/gpt-5.3-codex
# No LLM_API_KEY required; LiteLLM handles OAuth device flow and token storage.

Task format

Tasks in PLAN.md use XML inside markdown:

<phases>
  <dependencies>
    <package>flask>=2.0</package>
    <package>sqlalchemy>=2.0</package>
  </dependencies>

  <review>
    Review instructions for post-phase code review...
  </review>

  <phase name="Phase 1: Setup">
    <task id="1.1">
      <name>Create config module</name>
      <files>src/config.py</files>
      <action>Create configuration with pydantic settings...</action>
      <verify>python -c "import src.config"</verify>
      <done>Config module imports successfully</done>
    </task>

    <task id="1.2" depends_on="1.1">
      <name>Add database layer</name>
      <files>src/db.py</files>
      <action>Create database module using config...</action>
      <verify>python -c "import src.db"</verify>
      <done>Database module imports successfully</done>
    </task>
  </phase>
</phases>
  • <dependencies> — third-party packages needed, with version constraints
  • <review> — instructions for reviewing each phase's output
  • <task> — atomic unit of work with files, action, verification, and done criteria
  • depends_on — optional attribute on <task>: a comma-separated list of task IDs this task depends on. If omitted, the task depends on all prior tasks in the phase (sequential by default). Set it explicitly to depend on only specific tasks, or on none, which lets independent tasks run in parallel.
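
Because the task format is plain XML embedded in markdown, extracting tasks is straightforward with the stdlib. The sketch below is a reading of the documented format above, not sago's actual parser:

```python
import xml.etree.ElementTree as ET

# Trimmed version of the PLAN.md snippet shown above.
PLAN_SNIPPET = """\
<phases>
  <phase name="Phase 1: Setup">
    <task id="1.1">
      <name>Create config module</name>
      <verify>python -c "import src.config"</verify>
    </task>
    <task id="1.2" depends_on="1.1">
      <name>Add database layer</name>
      <verify>python -c "import src.db"</verify>
    </task>
  </phase>
</phases>
"""

def parse_tasks(xml_text):
    """Return (id, depends_on list, verify command) per task."""
    root = ET.fromstring(xml_text)
    tasks = []
    for task in root.iter("task"):
        deps = [d.strip() for d in task.get("depends_on", "").split(",") if d.strip()]
        tasks.append((task.get("id"), deps, task.findtext("verify")))
    return tasks

print(parse_tasks(PLAN_SNIPPET)[1])  # ('1.2', ['1.1'], 'python -c "import src.db"')
```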

Why sago

The planning problem. AI coding agents are great at writing code for a well-defined task. But ask them to build an entire project from a vague description and they lose track of requirements, skip steps, pick incompatible dependencies, and produce inconsistent architectures. The gap isn't in code generation — it's in project planning.

Sago fills that gap. It uses an LLM to generate a structured, verified plan with atomic tasks, dependency ordering, and environment-aware dependency suggestions. Then it hands off to whatever coding agent you prefer.

Model-agnostic planning. Sago uses LiteLLM for plan generation, so you're not locked into any provider. Use OpenAI, Anthropic, Azure, Gemini, Mistral — whatever gives you the best plans.

Agent-agnostic execution. Sago doesn't care what builds the code. Claude Code, Cursor, Aider, Copilot, a human — anything that can read markdown and follow instructions. Sago generates the plan; you choose the builder.

Spec-first, always. Every sago project has a reviewable spec (PROJECT.md, REQUIREMENTS.md) and a reviewable plan (PLAN.md) before any code is written. You see exactly what will be built and can adjust before spending time or tokens on execution.


Sago vs GSD

GSD (Get Shit Done) is a great project that inspired sago. Both solve the same core problem — AI coding agents are bad at planning — but they take different approaches.

  • What it is — Sago: standalone CLI tool (pip install). GSD: prompt system loaded into Claude Code.
  • Coding agent — Sago: any (Claude Code, Cursor, Aider, Copilot, a human). GSD: Claude Code only (uses its sub-agent spawning).
  • Planning LLM — Sago: any LiteLLM provider (OpenAI, Anthropic, Gemini, etc.). GSD: Claude (via Claude Code).
  • Execution — Sago: you hand off to your coding agent. GSD: spawns executor agents in fresh contexts.
  • Context management — Sago: not sago's concern; your agent manages its own context. GSD: core feature; fights "context rot" by spawning fresh 200k-token windows per task.
  • Phase transitions — Sago: explicit phase gate (sago replan) with code review and optional replan. GSD: automatic wave-based execution with /gsd:execute-phase.
  • Research — Sago: you write PROJECT.md + REQUIREMENTS.md (or generate from a prompt). GSD: spawns parallel researcher agents to investigate the domain.
  • Review — Sago: ReviewerAgent runs between phases via sago replan and saves findings to STATE.md. GSD: /gsd:verify-work with interactive debug agents.

When to use GSD: You use Claude Code exclusively and want a fully automated pipeline — research, plan, execute, verify — all within Claude Code's sub-agent system. GSD's context rotation (fresh windows per task) is its killer feature for large projects.

When to use sago: You want to use different coding agents (or switch between them), want to use a non-Claude LLM for planning, or prefer an explicit human-in-the-loop workflow where you review the plan and gate phase transitions yourself. Sago is the project manager and control plane; you pick the developer.


Development

pip install -e ".[dev]"       # install with dev dependencies

pytest                        # run all tests
pytest tests/test_parser.py -v              # single file
pytest tests/test_parser.py::test_name -v   # single test

ruff check src/               # lint
black src/ tests/             # format
mypy src/                     # type check (strict mode)
skylos src/                   # dead code detection

Acknowledgements

This project was vibecoded with Claude Code.

Sago takes inspiration from:

  • GSD (Get Shit Done) — spec-driven development and sub-agent orchestration for Claude Code
  • Claude Flow — multi-agent orchestration platform with wave-based task coordination

Dead code is kept in check by Skylos.


License

Apache 2.0. See LICENSE.
