zeus (agent)

Security Audit: Pass

Health — Pass
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 12 GitHub stars
Code — Pass
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions — Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool is a structured, AI-driven project management framework designed to guide long-running software projects. It uses deterministic planning, wave-based task execution, and production feedback loops to automate development workflows and Git operations.

Security Assessment
The framework relies heavily on executing shell commands and directly interacts with your local Git environment, notably by instructing users to install a custom Git commit hook (`commit-msg`). A lightweight scan of the repository found no dangerous code patterns, hardcoded secrets, or excessive permission requests. However, because it acts as an agent that executes tasks and makes atomic commits on your behalf, it inherently requires read and write access to your source code and Git history. Overall risk is rated as Medium; while the current codebase appears safe, AI agents executing automated shell commands always warrant careful supervision.

Quality Assessment
The project is very new but actively maintained, with its most recent push happening today. It uses the standard, permissive MIT license and provides clear, comprehensive documentation in both English and Chinese. Community trust is currently minimal given the project's niche scope, as reflected by its 12 GitHub stars. The architecture is well organized, outputting structured JSON and Markdown files rather than opaque binaries.

Verdict
Use with caution: the code is safe and well-documented, but because it acts as an autonomous shell agent, you should review its generated commands before allowing it to modify your codebase.
SUMMARY

Structured, version-aware AI delivery framework for long-running projects. Deterministic planning (Spec/PRD/Task/Roadmap) + wave-based atomic execution + Git-hook enforcement + zero-manual multi-platform testing (Android/iOS/Chrome) + closed-loop evolution from real production feedback.

README.md

# Zeus - AI Project Evolution Operating System


Structured, version-aware AI delivery framework for long-running projects.

Zeus combines:

  • deterministic planning artifacts (spec, prd, task, roadmap),
  • wave-based execution with atomic commits,
  • mandatory attribution loop from production feedback to roadmap evolution.
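Wave-based execution amounts to topological layering over task dependencies: every task whose dependencies are already complete runs in the current wave. A minimal shell illustration of that scheduling idea (the `id:dep1,dep2` encoding and the task IDs are hypothetical — Zeus's real task state lives in `task.json`):

```shell
# Hypothetical sketch of wave scheduling: each entry is "id:dep1,dep2".
# Illustration of the concept only, not Zeus's actual task.json format.
# Assumes the dependency graph is acyclic.
tasks="T1: T2:T1 T3:T1 T4:T2,T3"
done_set=""
wave=1
remaining="$tasks"
while [ -n "$remaining" ]; do
  this_wave=""
  next_remaining=""
  for t in $remaining; do
    id=${t%%:*}
    deps=${t#*:}
    ready=1
    old_ifs=$IFS; IFS=','
    for d in $deps; do
      case " $done_set " in
        *" $d "*) ;;    # dependency already completed
        *) ready=0 ;;   # still waiting on this dependency
      esac
    done
    IFS=$old_ifs
    if [ "$ready" -eq 1 ]; then
      this_wave="$this_wave $id"
    else
      next_remaining="$next_remaining $t"
    fi
  done
  echo "wave $wave:$this_wave"
  done_set="$done_set$this_wave"
  remaining="${next_remaining# }"
  wave=$((wave + 1))
done
```

With the sample input this prints `wave 1: T1`, `wave 2: T2 T3`, `wave 3: T4` — T2 and T3 share a wave because both depend only on T1.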

Language: English | 简体中文

## Quick Start

```bash
# 1) Install the commit message hook (one-time)
cp .zeus/hooks/commit-msg .git/hooks/commit-msg

# 2) Initialize the Zeus project workspace
/zeus:init

# Optional for brownfield repositories: map the existing codebase first
/zeus:discover --depth auto

# 3) Build the first design spec
/zeus:brainstorm --full

# 4) Convert approved spec to executable artifacts
/zeus:plan

# 5) Run pending tasks in dependency waves
/zeus:execute
```

## Workflow

English workflow diagram: Zeus Workflow EN (`assets/zeus-workflow.en.svg`)

Chinese workflow diagram: Zeus Workflow ZH (`assets/zeus-workflow.zh-CN.svg`)

## Skill Commands

| Command | Purpose | Main Output |
| --- | --- | --- |
| `/zeus:init` | Initialize Zeus workspace and north star metrics | `.zeus/main/config.json`, `evolution.md` |
| `/zeus:discover [--version v2] [--depth quick\|auto\|full]` | Map existing codebase and generate brownfield context artifacts | `codebase-map.json`, `existing-modules.json`, `tech-inventory.md`, `architecture.md` |
| `/zeus:brainstorm --full` | Full-scope design dialogue and spec authoring | `.zeus/main/specs/*.md` |
| `/zeus:brainstorm --feature <name>` | Single-feature design loop | feature spec |
| `/zeus:plan [--version v2]` | Convert spec to user stories and tasks | `prd.json`, `task.json`, `roadmap.json` |
| `/zeus:execute [--version v2]` | Execute pending tasks wave by wave | atomic commits, task pass states |
| `/zeus:test-gen [--version v2] [--platforms android,chrome,ios]` | AI-generate platform test flows from task/prd artifacts | `{version}/tests/*.test.json` |
| `/zeus:feedback` | Capture feedback and run attribution | `feedback/*.json`, evolution entry |
| `/zeus:evolve` | Create a new version branch/folder model | `.zeus/vN/*` |
| `/zeus:status` | Render global status report and next action | health snapshot + recommendation |

## Repository Layout

```
.zeus/
  main/
    config.json
    prd.json
    task.json
    roadmap.json
    evolution.md
    feedback/
    ai-logs/
    specs/
    tests/
      android.test.json   ← AI-generated, do not edit manually
      chrome.test.json
      ios.test.json
  v2/ ... vN/
  schemas/
    config.schema.json
    codebase-map.schema.json
    existing-modules.schema.json
    prd.schema.json
    task.schema.json
    roadmap.schema.json
    spec.schema.json
    feedback.schema.json
    ai-log.schema.json
    test-flow.schema.json
  scripts/
    zeus-runner.sh
    generate-tests.sh
    collect-metrics.sh
```

## Brownfield Adoption

For existing repositories, run this path:

```bash
# 1) Build codebase context artifacts
/zeus:discover --version main --depth auto

# 2) Initialize config using discovered context
/zeus:init --import-existing --version main

# 3) Design and plan a scoped feature against existing modules
/zeus:brainstorm --feature <name> --version main
/zeus:plan --version main

# 4) Execute with wave gates
/zeus:execute --version main

```

This keeps Zeus backward-compatible for greenfield projects while adding safe brownfield onboarding.
Repository layout, continued:

```
.zeus/
  hooks/
    commit-msg

.claude/
  skills/zeus-*/SKILL.md
  agents/
    *.md

assets/
  zeus-workflow.en.svg
  zeus-workflow.zh-CN.svg
```

## Agent Model

Zeus uses phase-specific agents under `.claude/agents`:

- `zeus-researcher`: context discovery and dependency checks
- `zeus-planner`: spec decomposition and artifact shaping
- `zeus-executor`: wave execution orchestration with quality gates
- `zeus-analyst`: attribution confidence and evolution decisions
- `zeus-docs`: bilingual consistency and docs quality checks
- `zeus-tester`: AI test case authoring for android / chrome / ios platforms

Skills should delegate intentionally:
- brainstorming -> researcher
- plan -> planner
- execute -> executor
- test generation -> tester (via `generate-tests.sh`)
- feedback/evolve -> analyst
- docs quality checks -> docs
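The delegation rules above amount to a simple lookup from skill to agent; a sketch (the agent names come from this README, but the dispatch function itself is illustrative, not Zeus internals):

```shell
# Map a Zeus skill to its delegated agent (names from the list above;
# the lookup mechanism is illustrative, not part of Zeus).
agent_for() {
  case "$1" in
    brainstorm)      echo zeus-researcher ;;
    plan)            echo zeus-planner ;;
    execute)         echo zeus-executor ;;
    test-gen)        echo zeus-tester ;;
    feedback|evolve) echo zeus-analyst ;;
    docs)            echo zeus-docs ;;
    *)               echo unknown ;;
  esac
}

agent_for plan       # -> zeus-planner
agent_for feedback   # -> zeus-analyst
```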

## Testing

Zeus uses AI-generated test flows. **Do not write test cases manually.**

```bash
# Generate test flows for all platforms (after zeus:plan)
bash .zeus/scripts/generate-tests.sh --version main --platforms android,chrome,ios

# Or via skill
/zeus:test-gen

# Target a single platform
/zeus:test-gen --platforms chrome

# Regenerate (overwrite existing)
bash .zeus/scripts/generate-tests.sh --version main --force

```

Generated files live at `.zeus/{version}/tests/{platform}.test.json` and conform to `.zeus/schemas/test-flow.schema.json`.

Test execution uses the native platform toolchain directly:

| Platform | Toolchain |
| --- | --- |
| Android | `adb shell` |
| Chrome | `chrome-cli` / Chrome DevTools Protocol |
| iOS | `xcrun simctl` / `libimobiledevice` |

Test flows are regenerated automatically when `/zeus:test-gen` is invoked, and optionally after each execution wave completes.
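One quick sanity check after generation is that a flow file exists for each requested platform. A self-contained sketch (the temp directory and stub files stand in for `.zeus/main/`, created here only so the example runs):

```shell
# Check that a test flow exists for each requested platform.
# $version_dir stands in for .zeus/main; stub files are created here
# purely so this demo is runnable outside a Zeus repo.
version_dir=$(mktemp -d)
mkdir -p "$version_dir/tests"
for p in android chrome ios; do
  printf '{"platform": "%s", "flows": []}\n' "$p" > "$version_dir/tests/$p.test.json"
done

missing=0
for p in android chrome ios; do
  [ -f "$version_dir/tests/$p.test.json" ] || missing=$((missing + 1))
done
echo "missing flow files: $missing"
```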

## AI Log Contract

Each skill execution must append one markdown log in `ai-logs/`:

```markdown
## Decision Rationale
Why this approach was selected.

## Execution Summary
What changed and where.

## Target Impact
Expected impact on the north star metric.
```
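The contract can be satisfied with a plain heredoc append. This sketch writes into a temp directory so it runs anywhere; the real destination would be `.zeus/{version}/ai-logs/`, and the timestamped filename pattern is an assumption, not a documented Zeus requirement:

```shell
# Append one markdown log per skill execution. Writing to a temp dir for
# the demo; the real destination would be .zeus/main/ai-logs/. The
# "<timestamp>-execute.md" filename is an assumption.
log_dir=$(mktemp -d)
ts=$(date +%Y%m%d-%H%M%S)
cat > "$log_dir/$ts-execute.md" <<'EOF'
## Decision Rationale
Why this approach was selected.

## Execution Summary
What changed and where.

## Target Impact
Expected impact on the north star metric.
EOF
log_count=$(ls "$log_dir" | wc -l)
echo "logs written: $log_count"
```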

## Commit Convention

```
feat(T-003): implement user registration form
fix(T-007): correct session token expiry
docs(zeus): update prd from auth-design spec
chore(zeus): initialize v2 evolution
```
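The actual enforcement lives in the `commit-msg` hook installed during Quick Start; its core check could look roughly like this (the regex is an assumption derived from the examples above, not the hook's real contents):

```shell
# Sketch of the check a commit-msg hook could apply to the convention above.
# The real hook ships at .zeus/hooks/commit-msg; this regex is an assumption.
check_msg() {
  printf '%s\n' "$1" | grep -Eq '^(feat|fix|docs|chore)\((T-[0-9]+|zeus)\): .+'
}

check_msg "feat(T-003): implement user registration form" && ok_good=yes || ok_good=no
check_msg "update stuff"                                  && ok_bad=yes  || ok_bad=no
echo "$ok_good $ok_bad"   # -> yes no
```

In a real hook, `check_msg "$(head -n 1 "$1")" || exit 1` would reject the commit.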

## Troubleshooting

  • If `/zeus:*` commands are not discovered, restart your AI runtime session.
  • If execution stalls, verify `.zeus/scripts/zeus-runner.sh` is executable.
  • If task updates fail, check JSON validity in `.zeus/*/task.json`.
  • If the commit hook fails, re-copy `.zeus/hooks/commit-msg` into `.git/hooks/`.
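For the JSON-validity bullet, the Python standard library is enough. A self-contained check (a temp file is used here so the demo runs anywhere; in a real repo, point it at `.zeus/main/task.json` or any `.zeus/*/task.json`):

```shell
# Validate JSON syntax with the Python stdlib. In a real repo, replace
# "$tmp" with .zeus/main/task.json; the sample content here is a stub.
tmp=$(mktemp)
printf '{"tasks": [{"id": "T-001", "status": "pending"}]}' > "$tmp"
if python3 -m json.tool "$tmp" > /dev/null 2>&1; then
  status=valid
else
  status=invalid
fi
echo "task file is $status"
rm -f "$tmp"
```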

## Contributing

  1. Keep prompt specs deterministic and artifact-driven.
  2. Keep shell snippets in English only.
  3. Preserve backward compatibility for core .zeus schema files.
  4. Add docs updates for any workflow changes.

## Acknowledgements / Friend Links (友链)

## Contact / Chat Group (交流群)

## License

MIT License — see `LICENSE`.
