ai-engineering
Health Pass
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 25 GitHub stars
Code Pass
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Pass
- Permissions — No dangerous permissions requested
This tool transforms any repository into a governed AI workspace. It enforces quality gates, security scanning, and risk management locally via git hooks, providing a standardized workflow for various AI coding assistants.
Security Assessment
The framework operates locally and utilizes git hooks, which inherently requires the execution of local shell commands and file system access. However, the automated code scan (12 files checked) found no dangerous patterns, hardcoded secrets, or requests for overly broad permissions. The tool is designed to keep governance inside the repo rather than relying on external servers.
Overall risk: Low.
Quality Assessment
The project appears to be in active development, with its last push occurring today. It is distributed under the highly permissive MIT license. While the community trust is currently small (25 GitHub stars), the repository demonstrates strong professional indicators. It utilizes standard Python packaging, continuous integration (CI), and integrates third-party quality assurance tools like SonarCloud and Snyk.
Verdict
Safe to use.
Turn any repo into a governed AI workspace. Quality gates, security scanning, and risk management — enforced locally via git hooks. Works with Claude Code, GitHub Copilot, Cursor, Gemini & Codex.
47 skills. 10 agents. 4 IDEs. One governed workflow.
AI governance that developers actually want -- for teams that ship.
ai-engineering turns any repository into a governed AI workspace. Governance is content-first: policies, skills, agents, runbooks, and specs all live as versioned files inside the repo -- no hosted control plane, no vendor lock-in. It works across Claude Code, GitHub Copilot, OpenAI Codex, and Gemini CLI from the same repository.
Install · Quick Start · What You Get · How It Works · CLI · Slash Commands · Inspirations · Contributing
Install
Prerequisites: Python 3.11+ and Git.
Recommended: pipx (isolated, global)
pipx install ai-engineering
Alternative: uv
uv tool install ai-engineering
Fallback: pip (requires a virtual environment)
python -m venv .venv && source .venv/bin/activate
pip install ai-engineering
Verify
ai-eng version
Update to latest version
# pipx
pipx upgrade ai-engineering
# uv
uv tool upgrade ai-engineering
# pip
pip install --upgrade ai-engineering
After upgrading, run ai-eng update in each project to pull the latest skills, contexts, and runbooks, then ai-eng doctor to verify.
Quick Start
cd your-project
ai-eng install .
ai-eng doctor
install scaffolds the governance root, detects your stack, and mirrors skills to every configured IDE. It also auto-installs missing tools (ruff, gitleaks, ty, pip-audit) via your OS package manager. doctor validates the installation, checks tooling, and reports anything that needs attention.
See GETTING_STARTED.md for the full tutorial.
Migration -- spec-101 install contract (BREAKING)
Starting with this release, ai-eng install and ai-eng doctor --fix --phase tools enforce a hard, data-driven contract. If you are upgrading from a pre-spec-101 install, walk through this section once.
EXIT 80 / EXIT 81 -- hard fail on missing tooling
Two reserved exit codes replace the previous best-effort silent pass:
| Code | Meaning |
|---|---|
| EXIT 80 | A required CLI tool is missing or unverifiable after install. Examples: ruff, ty, gitleaks, pip-audit, prettier, eslint, vitest, staticcheck, phpstan, cargo-audit, ktlint, swiftlint, sqlfluff, shellcheck, clang-tidy. |
| EXIT 81 | A language SDK / prerequisite from prereqs.sdk_per_stack is missing. Examples: JDK, Swift toolchain, Dart SDK, .NET SDK, Go toolchain, Rust toolchain, PHP, clang/LLVM. |
Migration: remove any ai-eng install || true shielding from your CI scripts. The framework now surfaces failures explicitly so you can fix them, not paper over them. If a tool is genuinely unsupported on a host OS, declare it via platform_unsupported (tool-level, max 2 of 3 OSes) or escalate via platform_unsupported_stack (stack-level, may list all 3) -- both require a non-empty unsupported_reason (D-101-03 + D-101-13). See .ai-engineering/manifest.yml > required_tools for working examples.
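The CI-side change can be sketched in plain shell. This is an illustration only: handle_install_exit is a hypothetical helper name, and the messages are examples; only the EXIT 80/81 semantics come from the contract above.

```shell
# Sketch: mapping the reserved spec-101 exit codes in a CI step.
handle_install_exit() {
  case "$1" in
    0)  echo "install ok" ;;
    80) echo "EXIT 80: required CLI tool missing -- fix tooling instead of masking" >&2 ;;
    81) echo "EXIT 81: language SDK / prerequisite missing" >&2 ;;
    *)  echo "install failed (exit $1)" >&2 ;;
  esac
  return "$1"
}

# In CI you would run:  ai-eng install . ; handle_install_exit "$?"
```

The point is the inverse of the old `|| true` pattern: the wrapper reports the failure class and then propagates the exit code so the pipeline still fails.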
platform_unsupported -- tool vs stack scope
Two governance keys control where unsupported markers may appear:
- platform_unsupported lives on a single tool inside a stack's tool list. Caps at 2 of 3 OSes; using it for all 3 is rejected by the model validator. Example: semgrep carries platform_unsupported: [windows].
- platform_unsupported_stack lives on the entire stack block when the whole toolchain has no native binaries on a given OS. May list all 3 OSes. Example: the swift stack carries platform_unsupported_stack: [linux, windows] because swiftlint and swift-format ship for macOS only.
Both keys require an unsupported_reason field; the lint refuses an unreasoned escalation.
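In manifest terms, the two scopes might look like the following sketch. The key names, OS lists, and the semgrep/swift examples come from the text above; the surrounding YAML layout and the reason strings are assumptions -- see .ai-engineering/manifest.yml > required_tools for the authoritative shape.

```yaml
# Illustrative fragment only; exact schema may differ.
tools:
  - name: semgrep
    platform_unsupported: [windows]          # tool-level: max 2 of 3 OSes
    unsupported_reason: "no native Windows binary"   # hypothetical wording

swift:
  platform_unsupported_stack: [linux, windows]   # stack-level: may list all 3
  unsupported_reason: "swiftlint and swift-format ship for macOS only"
```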
python_env.mode decision tree
python_env.mode defaults to uv-tool. Three values exist:
Work down the questions in order; the first "yes" selects the mode:

1. Need a fresh worktree to be fast (< 30 s)? Use uv-tool (default, recommended): tools install once into ~/.local/share/uv/tools/.
2. Need .venv/ for legacy workflows (source .venv/bin/...)? Use venv: a classic per-cwd .venv/, with slower worktree setup.
3. Want a single .venv shared across worktrees (requires a git repo)? Use shared-parent: the .venv lives at the repo root and is linked from worktrees.
Set the value in .ai-engineering/manifest.yml:
python_env:
mode: uv-tool # or: venv | shared-parent
A full reference (.ai-engineering/contexts/python-env-modes.md) covers migration commands and trade-offs.
14 stacks covered by required_tools
A single manifest.yml > required_tools block drives both ai-eng install and ai-eng doctor --fix. The 14 supported stacks are:
| # | Stack | Representative tools |
|---|---|---|
| 1 | python | ruff, ty, pip-audit, pytest |
| 2 | typescript | prettier, eslint, tsc, vitest |
| 3 | javascript | prettier, eslint, vitest |
| 4 | java | checkstyle, google-java-format |
| 5 | csharp | dotnet-format |
| 6 | go | staticcheck, govulncheck |
| 7 | php | phpstan, php-cs-fixer, composer |
| 8 | rust | cargo-audit |
| 9 | kotlin | ktlint |
| 10 | swift | swiftlint, swift-format (macOS only) |
| 11 | dart | dart-stack-marker |
| 12 | sql | sqlfluff |
| 13 | bash | shellcheck, shfmt |
| 14 | cpp | clang-tidy, clang-format, cppcheck |
Plus a universal baseline block (gitleaks, semgrep, jq) that applies to every stack.
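The shape of that block can be sketched as follows. The block name, the python tools, and the universal baseline come from this section; the exact field layout is an assumption, so treat this as an illustration rather than the real schema.

```yaml
# .ai-engineering/manifest.yml -- illustrative fragment only
required_tools:
  universal:          # baseline applied to every stack
    - gitleaks
    - semgrep
    - jq
  python:
    - ruff
    - ty
    - pip-audit
    - pytest
```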
First-run banner
The first install after upgrading prints a one-shot BREAKING banner to stderr. The banner mentions EXIT 80/81, the python_env.mode flip, and the 14-stack scope. It only fires once per project -- the flag persists in .ai-engineering/state/install-state.json (breaking_banner_seen).
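After the banner has fired, the persisted flag might look like this. The file path and flag name come from the text above; any other contents of the state file are unknown and omitted here.

```json
{
  "breaking_banner_seen": true
}
```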
What You Get
47 Skills
Skills are slash commands that encode team workflows as repeatable, governed procedures. Each skill carries its own trigger patterns, validation gates, and output contracts.
| Group | Skills |
|---|---|
| Workflow | brainstorm, plan, dispatch, code, test, debug, verify, review, eval, schema |
| Delivery | commit, pr, release-gate, cleanup, market |
| Enterprise | security, governance, pipeline, docs, board-discover, board-sync, platform-audit |
| Teaching | explain, guide, write, slides, media, video-editing |
| Design | design, animation, canvas |
| SDLC | note, standup, sprint, postmortem, support, resolve-conflicts |
| Meta | create, learn, prompt, start, analyze-permissions, instinct, autopilot, run, constitution, skill-evolve |
10 Agents
Agents are role-based specialists that skills dispatch to. Each agent has a defined mandate, boundaries, and output contract.
| Agent | Role |
|---|---|
| plan | Architecture, specs, decomposition |
| build | Code generation with quality gates |
| verify | Evidence-first verification (7 specialist lenses) |
| guard | Governance, compliance, policy enforcement |
| review | Narrative code review (9 specialist lenses) |
| explore | Deep codebase research and analysis |
| guide | Onboarding, teaching, knowledge transfer |
| simplify | Reduce complexity, refactor, extract |
| autopilot | Autonomous multi-spec execution |
| run-orchestrator | Source-driven backlog execution |
14 Runbooks
Self-contained Markdown automation contracts. Each runbook carries its own purpose, cadence, hierarchy rules, and expected outputs. All are human-in-the-loop: they prepare work items but never touch code.
| Cadence | Runbooks |
|---|---|
| Daily | triage, refine, feature-scanner, stale-issues |
| Weekly | dependency-health, code-quality, security-scan, docs-freshness, performance, governance-drift, architecture-drift, work-item-audit, consolidate, wiring-scanner |
Contexts
14 language contexts (bash, C++, C#, Dart, Go, Java, JavaScript, Kotlin, PHP, Python, Rust, SQL, Swift, TypeScript) and 15 framework contexts (Android, API Design, ASP.NET Core, Backend Patterns, Bun, Claude API, Deployment Patterns, Django, Flutter, iOS, MCP SDK, Next.js, Node.js, React, React Native) ship with the framework. These are loaded at session start based on your project's detected stack and applied to all code generation and review.
Quality Gates
Enforced on every commit, not just in CI.
| Gate | Threshold |
|---|---|
| Test coverage | >= 80% |
| Code duplication | <= 3% |
| Cyclomatic complexity | <= 10 per function |
| Cognitive complexity | <= 15 per function |
| Blocker/critical issues | 0 |
| Security findings (medium+) | 0 |
| Secret leaks | 0 |
| Dependency vulnerabilities | 0 |
Tooling: ruff + ty (lint/format), pytest (test), gitleaks (secrets), pip-audit (deps).
How It Works
ai-eng install . creates a governance root alongside IDE-specific mirrors:
your-project/
├── .ai-engineering/ # governance root
│ ├── contexts/ # language, framework, and team context
│ ├── runbooks/ # automation contracts
│ ├── runs/ # autonomous execution state
│ ├── scripts/ # hooks and helpers
│ ├── specs/ # active spec and plan
│ ├── state/ # decisions, events, capabilities
│ └── LESSONS.md # persistent learning across sessions
├── .claude/ # Claude Code skills + agents (canonical)
├── .codex/ # OpenAI Codex mirror
├── .gemini/ # Gemini CLI mirror
├── .github/ # GitHub Copilot mirror
├── AGENTS.md # Codex instruction file
├── CLAUDE.md # Claude Code instruction file
└── GEMINI.md # Gemini CLI instruction file
Three ownership boundaries
| Boundary | What it covers | How it changes |
|---|---|---|
| Framework-managed | Skills, agents, runbooks, gates | ai-eng update -- preview before apply |
| Team-managed | contexts/team/**, lessons, constitution | Your team edits directly |
| Project-managed | Specs, plans, decisions, work-item state | Generated during workflow execution |
Multi-IDE mirroring
.claude/ is the canonical surface. Running ai-eng sync regenerates all other IDE mirrors (.codex/, .gemini/, .github/) from the canonical source. One set of skills, consistent behavior across all four IDEs.
CLI Commands
| Command | Purpose |
|---|---|
| ai-eng install [TARGET] | Scaffold governance into a project |
| ai-eng update [TARGET] | Preview and apply framework updates |
| ai-eng doctor [TARGET] | Validate installation and tooling |
| ai-eng validate [TARGET] | Check manifest and structural integrity |
| ai-eng verify [TARGET] | Run verification checks |
| ai-eng sync | Regenerate IDE mirrors from canonical source |
| ai-eng spec verify\|list\|catalog\|compact | Manage specs |
| ai-eng decision record\|list\|expire-check | Track architectural decisions |
| ai-eng release <VERSION> | Cut a release |
| ai-eng version | Print current version |
| ai-eng gate pre-commit\|commit-msg\|pre-push\|risk-check\|all | Run quality gates |
| ai-eng stack add\|remove\|list | Manage project stacks |
| ai-eng ide add\|remove\|list | Manage IDE configurations |
| ai-eng provider add\|remove\|list | Manage AI provider mirrors |
| ai-eng workflow commit\|pr\|pr-only | Delivery workflows |
| ai-eng maintenance report\|pr\|all | Repository maintenance |
| ai-eng setup platforms\|github\|sonar | Platform onboarding |
| ai-eng work-item sync | Sync work items with board |
| ai-eng skill status | Show skill installation status |
| ai-eng vcs status\|set-primary | Version control configuration |
| ai-eng guide | Interactive onboarding |
Slash Commands
Skills are invoked as slash commands inside your IDE. The two primary flows:
Spec-driven flow
The default path for planned work after install and health-check:
/ai-start --> /ai-brainstorm --> /ai-plan --> /ai-dispatch --> /ai-verify --> /ai-pr
(start) (spec) (plan) (execute) (evidence) (ship)
Backlog-driven flow
Autonomous execution against a work-item backlog:
/ai-run --> intake --> explore --> waves --> /ai-pr
(start) (filter) (context) (execute) (ship)
Key commands
| Command | What it does |
|---|---|
| /ai-start | Bootstrap the session with context, dashboard, and active work |
| /ai-brainstorm | Define requirements as a structured spec |
| /ai-plan | Decompose a spec into executable tasks |
| /ai-dispatch | Execute one approved plan |
| /ai-autopilot | Execute a multi-spec DAG autonomously |
| /ai-run | Execute a source-driven backlog run |
| /ai-review | Architecture-aware code review (9 specialist lenses) |
| /ai-verify | Evidence-backed verification (7 specialist lenses) |
| /ai-pr | Open, watch, and merge the pull request |
Standing on the shoulders of...
ai-engineering builds on ideas, patterns, and principles from these projects:
| Project | What we learned |
|---|---|
| Superpowers | Brainstorm hard-gate, TDD-for-skills patterns |
| review-code | Handler-as-workflow architecture, parallel specialist agents, finding-validator |
| dotfiles/ai | Agent matrix, SDLC coverage patterns |
| autoresearch | Radical simplicity as a design principle |
| Emil Kowalski | Motion principles, spring physics, easing strategy |
| SpecKit | Spec-driven workflow inspiration |
| GSD | Autonomous execution patterns |
| Anthropic Skills | Frontend-design, canvas, skill-creator -- absorbed and extended |
Contributing
Contributions are welcome. See CONTRIBUTING.md for development setup, code style, testing, and pull request guidelines.
Code of conduct
This project follows the Contributor Covenant Code of Conduct. See CODE_OF_CONDUCT.md.
License
MIT. See LICENSE.