ai-orchestrator
Health — Pass
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 69 GitHub stars
Code — Fail
- rm -rf — Recursive force deletion command in scripts/analyze_project.sh
Permissions — Pass
- Permissions — No dangerous permissions requested
This tool is a portable, multi-agent developer setup that orchestrates local LLMs via Bash to automate software development workflows. It routes tasks like planning, coding, and reviewing through a pipeline powered by Claude Code and local Ollama models.
Security Assessment
Overall Risk: Medium. The tool executes shell commands natively and interacts with local APIs via `curl`, which is standard for its intended orchestration purpose. It makes network requests to local endpoints (Ollama) and tracks token usage in the user's home directory. The primary security flag is the presence of a recursive force deletion command (`rm -rf`) inside `scripts/analyze_project.sh`. While this might be intended for harmless temporary cleanup, any unsanitized or improperly scoped `rm -rf` command poses a risk of accidental local data loss. No hardcoded secrets were detected, and it does not request dangerous system permissions.
Quality Assessment
The project is actively maintained, with its most recent push occurring today. It is transparent about its operations, relies on standard command-line utilities (`jq`, `curl`), and is fully licensed under the permissive MIT license. The repository has accumulated 69 GitHub stars, indicating a fair level of early community trust and active interest from developers.
Verdict
Use with caution: the tool is actively maintained and open, but users should manually inspect the `rm -rf` command in the analysis script before execution to prevent potential local data loss.
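As a concrete pre-flight check, the flagged script can be audited before first run with a short shell helper. This is a sketch: the whitelist pattern below (allowing only `/tmp`- and `$TMPDIR`-scoped targets) is an assumption for illustration, not something the repo ships.

```shell
# Hypothetical audit helper: print any `rm -rf` in a script whose target
# is not clearly scoped to a temp directory. The whitelist regex is
# illustrative; adjust it to the paths you consider safe.
audit_deletions() {
  script="$1"
  grep -n 'rm -rf' "$script" | grep -vE 'rm -rf +("?\$\{?TMPDIR|/tmp/)' || true
}
```

Run it as `audit_deletions scripts/analyze_project.sh` and review every line it prints before executing the script.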
Portable multi-agent AI developer setup for Claude Code + Ollama. Role-based local LLM orchestration via Bash — plan, code, review, commit. Zero dependencies. Works with any language stack.
ai-orchestrator
Portable AI Developer setup: agents, slash commands, IDE orchestration, and language standards.
Works with any project — TypeScript, Python, Flutter, Swift, C++.
All orchestration is handled via pure Bash and jq.
What's included
```
ai-orchestrator/
├── documentation/
│   ├── CLAUDE.md            # Global instructions for Claude CLI
│   └── ai_rules.md          # Orchestration rules for embedded IDE agents
├── agents/                  # Subagents (run automatically via /implement)
│   ├── planner.md           # Explores codebase, writes implementation plan
│   ├── coder.md             # Generates code via local Ollama (role: coder)
│   ├── reviewer.md          # Reviews code against standards (role: reviewer)
│   ├── quick-coder.md       # Fast fixes (role: commit)
│   ├── commit.md            # Stages and commits changes (role: commit)
│   ├── doc-writer.md        # Creates/updates documentation (role: reviewer)
│   └── test-agent.md        # Writes and runs tests (role: coder)
├── commands/                # Slash commands
│   └── implement.md         # /implement — full plan → code → review pipeline
├── skills/                  # Language coding standards
│   └── ...-code-standarts.md
├── scripts/
│   ├── call_ollama.sh       # Central LLM interface (Bash + jq + curl)
│   ├── local-commit.sh      # Fast local LLM-driven git commits
│   ├── open-pr.sh           # Local LLM-driven pull request descriptions
│   ├── analyze_hardware.sh  # Auto-configures models based on your RAM/GPU
│   ├── analyze_project.sh   # Multi-agent tiered project structure analysis
│   └── install.sh           # Installer — configures dependencies and symlinks
└── llm-config.json          # Centralized model roles (symlinked to ~/.claude/)
```
How it works
The core workflow is a pipeline triggered by /implement:
planner → coder → build check → reviewer(s) → verdict
- Zero Python dependency: all agents call `call_ollama.sh` directly, which uses `curl` and `jq` for API interaction.
- Role-based config: one source of truth for all models in `llm-config.json`.
- Portable: symlinks ensure that updates to this repo apply globally to your system immediately.
Token Savings Tracker
After each /implement run, scripts/track_savings.sh estimates the tokens processed by local Ollama and appends the result to ~/.claude/token_stats.json. Savings are calculated against Claude Sonnet pricing ($3/M input, $15/M output) using a 1 token ≈ 4 chars approximation.
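The pricing arithmetic above can be reproduced in a few lines of shell. This is an illustration of the stated formula only; `track_savings.sh` itself may compute it differently.

```shell
# Estimate dollars saved: tokens = chars / 4, priced at $3 per 1M input
# tokens and $15 per 1M output tokens (the Claude Sonnet rates quoted above).
estimate_savings() {
  in_chars="$1"; out_chars="$2"
  awk -v i="$in_chars" -v o="$out_chars" 'BEGIN {
    in_tok  = i / 4                     # 1 token ~ 4 characters
    out_tok = o / 4
    printf "%.2f\n", (in_tok * 3 + out_tok * 15) / 1000000
  }'
}
```

For example, `estimate_savings 400000 120000` prints `0.75`: 100k input tokens at $3/M plus 30k output tokens at $15/M.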
View accumulated stats with the /stats command:
| Period flag | Description |
|---|---|
| `day` | Current calendar day |
| `week` | Last 7 days |
| `month` | Last 30 days |
```
/stats week
───────────────────────────────
ai-orchestrator savings
Period: this week
Runs: 12
Tokens saved: ~186,000
Estimated saving: $7.20
───────────────────────────────
```
`scripts/stats.sh` can also be called directly from the terminal.
Requirements
- Claude Code CLI installed
- Ollama installed and running
- `jq` (JSON processor) — `install.sh` will attempt to install it via brew/apt.
1. One-Liner Installation (Recommended)
All you need is one command to check dependencies, clone the repository, and optimize models for your hardware:
```
curl -sSL https://raw.githubusercontent.com/Mybono/ai-orchestrator/main/scripts/install.sh | bash
```
2. Manual Installation
If you prefer to do it manually:
```
git clone https://github.com/Mybono/ai-orchestrator ~/Projects/ai-orchestrator
cd ~/Projects/ai-orchestrator
chmod +x scripts/install.sh
./scripts/install.sh
```
What happens during installation
- Software check: scripts detect and help install `jq` and `Ollama`.
- Environment setup: symlinks created in `~/.claude/`, shell aliases added.
- Hardware analysis: system RAM/GPU analyzed to pick the best models.
- Configuration: `llm-config.json` generated and optimized for your machine.
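The symlink step can be pictured as follows. The file choices and destination are assumptions for illustration; `install.sh` is the authoritative version.

```shell
# Link repo files into ~/.claude so a plain `git pull` in the repo updates
# the whole system. `ln -sf` replaces any stale link from a previous run.
link_into_claude() {
  repo="$1"; dest="${2:-$HOME/.claude}"
  mkdir -p "$dest"
  ln -sf "$repo/llm-config.json"        "$dest/llm-config.json"
  ln -sf "$repo/scripts/call_ollama.sh" "$dest/call_ollama.sh"
}
```

Because the links point back into the clone, editing or pulling the repo changes what `~/.claude/` resolves to, with no copy step.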
Configuration (llm-config.json)
The system uses roles to determine which model to use for which task. The config is stored in the project root and symlinked to ~/.claude/llm-config.json.
| Role | Default Model | Purpose |
|---|---|---|
| `coder` | `qwen2.5-coder:14b...` | Heavy code generation (main agent) |
| `reviewer` | `qwen2.5-coder:7b` | Code review and documentation |
| `commit` | `qwen2.5-coder:7b` | Commit messages and tiny fixes |
| `embedding` | `nomic-embed-text` | Semantic search and RAG |
Example of `llm-config.json`:

```json
{
  "models": {
    "coder": "hf.co/bartowski/Qwen2.5-Coder-14B-Instruct-GGUF:IQ4_XS",
    "reviewer": "qwen2.5-coder:7b",
    "commit": "qwen2.5-coder:7b",
    "embedding": "nomic-embed-text"
  }
}
```
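Given a config like this, resolving a role to a model is a one-line `jq` lookup. This is a sketch of what `call_ollama.sh` plausibly does internally; the real script remains the source of truth.

```shell
# Print the model for a role, failing loudly (non-zero exit via jq -e) if
# the role is absent, instead of passing the string "null" on to Ollama.
resolve_model() {
  role="$1"; config="${2:-$HOME/.claude/llm-config.json}"
  jq -er --arg r "$role" '.models[$r]' "$config"
}
```

With the example config above, `resolve_model reviewer` prints `qwen2.5-coder:7b`.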
IDE Agent Delegation Workflow (Antigravity & Cursor)
IDE agents (like Antigravity) act as the Architect but delegate heavy lifting to local models via call_ollama.sh:
- Coding: uses the `coder` role from `llm-config.json`.
- Review: uses the `reviewer` role.
The delegation command:
```
# Uses the model defined for the 'coder' role in your config
bash ~/.claude/call_ollama.sh --role coder --prompt "implement X" --context-file /tmp/context.md
```
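Stripped of role lookup and context handling, such a wrapper reduces to one request against Ollama's `/api/generate` endpoint. This is a minimal sketch assuming Ollama's default port 11434; it is not the actual `call_ollama.sh`.

```shell
# Build the JSON body for Ollama's /api/generate endpoint.
# stream:false asks for one complete JSON response instead of chunks.
build_request() {
  jq -n --arg m "$1" --arg p "$2" '{model: $m, prompt: $p, stream: false}'
}

# POST it to the local daemon and print just the generated text.
call_ollama() {
  build_request "$1" "$2" |
    curl -sS http://localhost:11434/api/generate -d @- |
    jq -r '.response'
}
```

Usage would look like `call_ollama "qwen2.5-coder:7b" "implement X"`, with the model name coming from the role config rather than being hardcoded.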
Project Onboarding
To use these orchestration rules in your project (so IDE agents like Antigravity/Cursor can see them):
1. Copy the rules from the system directory to your project root:
   `cp ~/.claude/ai_rules.md ~/Projects/your-project/ai_rules.md`
2. (Optional) Multi-agent support: you can also name it `.cursorrules` or `.clauderules` if you use those specific tools.
3. Check delegation: once added, your IDE agent should start using `call_ollama.sh` for heavy lifting instead of spending your cloud tokens.
Project Analysis
The system includes a sophisticated analysis tool that provides a deep understanding of any project's architecture.
```
# Run from your project root
analyze_project
```
How it works (Multi-Agent Tiered Analysis)
The script orchestrates multiple local models in parallel:
- Structure Agent (7B): Rapidly maps the folder hierarchy and functional blocks.
- Documentation Agent (14B): Reads and summarizes all discovered `.md` files.
- Logic Agent (14B): Analyzes entry points, core classes, and tech-stack patterns.
The findings are synthesized into a Delta Report (.claude/context/analysis_delta.md). The IDE agent then reviews this report and merges relevant updates into the authoritative project_overview.md.
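The parallel fan-out and merge described above can be sketched with plain shell job control. `fan_out` and the per-agent commands are hypothetical names for illustration, not the repo's actual functions.

```shell
# Run independent agent commands concurrently, wait for all of them,
# then concatenate their outputs into a single delta report.
fan_out() {
  out="$1"; shift
  mkdir -p "$out"
  i=0
  for agent_cmd in "$@"; do
    i=$((i + 1))
    sh -c "$agent_cmd" > "$out/part$i.md" &   # one background job per agent
  done
  wait                                        # join before synthesis
  cat "$out"/part*.md > "$out/analysis_delta.md"
}
```

A call shaped like `fan_out .claude/context 'structure_agent' 'doc_agent' 'logic_agent'` (all three commands hypothetical) would produce the merged `analysis_delta.md` the IDE agent then reviews.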
Updating
```
cd ~/Projects/ai-orchestrator
git pull
```
Changes apply immediately — no reinstall needed.