# ContribAI

Autonomous AI agent that contributes to open source — discovers repos, analyzes code, generates fixes, and submits PRs.
## 🏆 Results
| Metric | Count |
|---|---|
| PRs Submitted | 43+ |
| PRs Merged | 9 |
| PRs Closed | 14 |
| Repos Contributed | 21+ |
| Notable Repos | Worldmonitor (45k⭐), Maigret (19k⭐), AI-Research-SKILLs (6k⭐), s-tui (5k⭐) |
Set it up once, wake up to merged PRs. See the Hall of Fame →

## How It Works
ContribAI discovers open source repositories, analyzes code for improvements, generates fixes, and submits Pull Requests — all autonomously.
```
┌───────────┐    ┌───────────┐    ┌───────────┐    ┌───────────┐    ┌───────────┐
│ Discovery │───▶│ Analysis  │───▶│ Generator │───▶│  PR + CI  │───▶│  Patrol   │
│           │    │ 20 skills:│    │ LLM +     │    │ Fork,     │    │ Auto-fix, │
│ Find repos│    │ security, │    │ self-     │    │ commit,   │    │ review    │
│ by lang,  │    │ quality,  │    │ review,   │    │ create    │    │ feedback  │
│ stars     │    │ perf      │    │ scoring   │    │ PR + CLA  │    │ & reply   │
└───────────┘    └───────────┘    └───────────┘    └───────────┘    └───────────┘
```
**Safety:** quality gate (7-check scorer), duplicate detection, AI policy respect, CI monitoring, rate limiting, dry-run mode.
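To make the quality-gate idea concrete, here is a minimal sketch of how a multi-check scorer can gate a candidate patch before a PR is opened. All names here (`Patch`, `quality_gate`, the individual checks) are hypothetical illustrations, not ContribAI's actual API; the real gate runs seven checks.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical patch representation for illustration only.
@dataclass
class Patch:
    diff: str
    files_changed: int

# Two illustrative checks; a real gate would run several more
# (duplicate detection, policy compliance, CI status, ...).
def non_empty(patch: Patch) -> bool:
    return bool(patch.diff.strip())

def small_enough(patch: Patch) -> bool:
    return patch.files_changed <= 5

CHECKS: List[Callable[[Patch], bool]] = [non_empty, small_enough]

def quality_gate(patch: Patch) -> bool:
    # A patch must pass every registered check to be eligible for a PR.
    return all(check(patch) for check in CHECKS)
```

The gate pattern keeps each check small and independently testable, and rejecting a patch only requires one check to fail.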
## Quick Start

```bash
# Install
git clone https://github.com/tang-vu/ContribAI.git
cd ContribAI
pip install -e ".[dev]"

# Configure
cp config.example.yaml config.yaml
# Edit config.yaml with your GitHub token + LLM API key

# Run
contribai hunt               # Autonomous: discover repos → analyze → PR
contribai target <repo_url>  # Target a specific repo
contribai run --dry-run      # Preview without creating PRs
```
## Features
| Category | Highlights |
|---|---|
| Analysis | Security (secrets, SQLi, XSS), code quality, performance, docs, UI/UX, refactoring |
| LLM | Gemini, OpenAI, Anthropic, Ollama, Vertex AI — smart task routing across model tiers |
| Hunt Mode | Multi-round autonomous hunting, cross-file fixes, inter-repo delay |
| PR Patrol | Monitors PRs for review feedback, auto-responds and pushes code fixes |
| MCP Server | 14 tools for Claude Desktop + Antigravity IDE via stdio protocol |
| Safety | AI policy detection, CLA auto-signing, quality gate, duplicate prevention |
| Platform | Web dashboard, scheduler, webhooks, Docker, profiles, plugins |
| Notifications | Slack, Discord, Telegram with retry |
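The "with retry" behavior in the notifications row can be sketched as a small exponential-backoff helper. This is a generic illustration, not ContribAI's actual implementation; `send_with_retry` and its parameters are hypothetical names.

```python
import time
from typing import Callable

def send_with_retry(send: Callable[[], None],
                    attempts: int = 3,
                    base_delay: float = 1.0) -> bool:
    """Retry a flaky notification send with exponential backoff.

    Returns True on success, False after exhausting all attempts.
    """
    for attempt in range(attempts):
        try:
            send()          # e.g. post to a Slack/Discord/Telegram webhook
            return True
        except Exception:
            if attempt == attempts - 1:
                return False
            # Back off: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * (2 ** attempt))
    return False
```

Exponential backoff avoids hammering a temporarily unavailable webhook endpoint while still recovering quickly from transient failures.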
## Usage

```bash
# Hunt mode (autonomous)
contribai hunt                          # Discover and contribute
contribai hunt --rounds 5 --delay 15    # 5 rounds, 15 min delay
contribai hunt --mode issues            # Issue solving only

# Target specific repos
contribai target <repo_url>             # Analyze and contribute
contribai solve <repo_url>              # Solve open issues

# Monitor & maintain
contribai patrol                        # Respond to PR reviews
contribai status                        # Check submitted PRs
contribai stats                         # Overall statistics
contribai cleanup                       # Remove stale forks

# Platform
contribai serve                         # Dashboard at :8787
contribai schedule --cron "0 */6 * * *" # Auto-run every 6h

# Profiles
contribai profile security-focused      # Run with preset profile
```
## Configuration

```yaml
# config.yaml
github:
  token: "ghp_your_token"      # or set GITHUB_TOKEN env var

llm:
  provider: "gemini"           # gemini | openai | anthropic | ollama
  model: "gemini-2.5-flash"
  api_key: "your_api_key"

discovery:
  languages: [python, javascript]
  stars_range: [100, 5000]
```

See `config.example.yaml` for all options.
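The "or set GITHUB_TOKEN env var" comment implies a precedence rule: an explicit config value wins, otherwise the environment variable is used. A minimal sketch of that lookup (the helper name `resolve_github_token` is hypothetical, not part of ContribAI's API):

```python
import os

def resolve_github_token(config: dict) -> str:
    # Explicit config.yaml value takes precedence; fall back to the
    # GITHUB_TOKEN environment variable, then to an empty string.
    return (config.get("github", {}).get("token")
            or os.environ.get("GITHUB_TOKEN", ""))
```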
## Architecture

```
contribai/
├── core/            # Config, models, middleware, events, retry, quotas
├── llm/             # Multi-provider LLM + task routing + context management
├── github/          # GitHub API client, discovery, guidelines
├── analysis/        # 20+ analysis skills + framework detection + compression
├── generator/       # Fix generation + self-review + quality scoring
├── orchestrator/    # Pipeline, SQLite memory (7 tables), review gate
├── pr/              # PR lifecycle + patrol + CLA/DCO compliance
├── issues/          # Issue classification + multi-file solving
├── agents/          # Sub-agent registry (DeerFlow-inspired)
├── tools/           # Extensible tool protocol
├── mcp/             # MCP client for external tools
├── mcp_server.py    # MCP server (14 tools for Claude Desktop)
├── sandbox/         # Docker-based code validation
├── web/             # FastAPI dashboard + webhooks + auth
├── scheduler/       # APScheduler cron automation
├── notifications/   # Slack, Discord, Telegram
├── plugins/         # Entry-point plugin system
├── templates/       # YAML contribution templates
└── cli/             # Rich CLI + TUI
```

See `docs/system-architecture.md` for detailed architecture.
## Docker

```bash
docker compose up -d dashboard            # Dashboard at :8787
docker compose run --rm runner run        # One-shot run
docker compose up -d dashboard scheduler  # Dashboard + scheduler
```
## Testing

```bash
pytest tests/ -v                 # Run all 431 tests
pytest tests/ -v --cov=contribai # With coverage
ruff check contribai/            # Lint
ruff format contribai/           # Format
```
## Extending

**Plugins** — create custom analyzers/generators as Python packages:
```python
from contribai.plugins.base import AnalyzerPlugin

class MyAnalyzer(AnalyzerPlugin):
    @property
    def name(self) -> str:
        return "my-analyzer"

    async def analyze(self, context):
        findings = []  # inspect `context` and collect findings here
        return findings
```
```toml
# pyproject.toml
[project.entry-points."contribai.analyzers"]
my_analyzer = "my_package:MyAnalyzer"
```
**MCP** — use ContribAI from Claude Desktop or Antigravity IDE:
```jsonc
// Claude Desktop:  ~/.config/claude/claude_desktop_config.json
// Antigravity IDE: ~/.gemini/antigravity/mcp_config.json
{
  "mcpServers": {
    "contribai": {
      "command": "python",
      "args": ["-m", "contribai.mcp_server"]
    }
  }
}
```
## Documentation

| Doc | Description |
|---|---|
| HALL_OF_FAME.md | 9 merged · 14 closed across 21+ repos — real results |
| system-architecture.md | Pipeline, middleware, events, LLM routing |
| code-standards.md | Conventions, patterns, testing |
| deployment-guide.md | Install, Docker, config, CLI reference |
| project-roadmap.md | Version history and future plans |
| codebase-summary.md | Module map and tech stack |
| CONTRIBUTING.md | Contribution guidelines |
## License

AGPL-3.0 + Commons Clause — see `LICENSE` for details.