orchestra
Health — Passed
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 13 GitHub stars
Code — Passed
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions — Passed
- Permissions — No dangerous permissions requested
This tool is an open-source AI agent orchestration platform built on LangGraph, designed to manage and deploy autonomous agents using the MCP and A2A protocols.
Security Assessment
While the automated code scan found no dangerous patterns, hardcoded secrets, or dangerous permission requests, the project explicitly includes tools for executing shell commands and Docker container operations. Additionally, setting up the platform requires external network requests to third-party AI providers (like OpenAI or Anthropic). Because the tool is designed to orchestrate autonomous agents that can interact with system infrastructure, the overall risk is rated as Medium. Users must be careful to restrict the tool's execution environment and properly secure their API keys.
Quality Assessment
The project is actively maintained, with its last push occurring today. It is protected by the standard Apache-2.0 license and utilizes a Developer Certificate of Origin (DCO) for contributions. Community trust is currently low but growing, as it has only accumulated 13 GitHub stars. The documentation is comprehensive and offers clear deployment guides for both self-hosted Docker environments and managed cloud instances.
Verdict
Use with caution: the code itself is clean, but the inherent risks of autonomous shell execution and container management require you to strictly configure your environment boundaries and safeguard your API keys.
Steerable Harnesses for DeepAgents — Orchestra🪶
Open-source AI agent orchestration platform built on LangGraph and powered by the MCP & A2A protocols.
Self-host for free or let us deploy it for you. Your agents, your data, your infrastructure.
🚀 Deployment Options
| Option | Best For | Get Started |
|---|---|---|
| Community (Free) | Developers, self-hosting | docker pull ghcr.io/ruska-ai/orchestra:latest |
| Managed Cloud | Teams wanting convenience | chat.ruska.ai |
| Enterprise | Organizations needing SSO, compliance, SLA | Contact Us |
📖 Table of Contents
This project includes tools for running shell commands and Docker container operations. For details, refer to the documentation sections below:
🐳 Docker Deployment (GHCR)
We publish the backend image to GitHub Container Registry (GHCR). For the full Docker/Docker Compose deployment guide (env setup, services, migrations, troubleshooting), jump to Docker Deployment details.
docker pull ghcr.io/ruska-ai/orchestra:latest
📋 Prerequisites
- Docker installed
- Python 3.11 or higher
- Access to OpenAI API (for GPT-4o model) or Anthropic API (for Claude 3.5 Sonnet)
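Before going further, you can sanity-check the local environment. The following is a small illustrative script (not part of the repository) that verifies the Python version and Docker availability:

```python
import shutil
import sys

def check_prerequisites() -> list[str]:
    """Return a list of missing prerequisites (empty list means all good)."""
    missing = []
    if sys.version_info < (3, 11):
        missing.append(f"Python 3.11+ required, found {sys.version.split()[0]}")
    if shutil.which("docker") is None:
        missing.append("Docker not found on PATH")
    return missing

problems = check_prerequisites()
print(problems or "all prerequisites satisfied")
```

API access (OpenAI or Anthropic) can only be verified against the provider itself, so it is left out of this check.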
🛠️ Development
Quick Reference
| Command | Description |
|---|---|
| `make dev` | Start backend server (port 8000) |
| `make dev.worker` | Start TaskIQ worker |
| `make test` | Run all backend tests |
| `make format` | Format code with Ruff |
| `make seeds.user` | Seed default users |
| `make migrate.up` | Apply all pending migrations |
For all commands, see backend/Makefile.
Environment Variables:
Create a `.env` file in both the backend and frontend directories and add your API key(s):

# Backend
cd <project-root>/backend
cp .example.env .env

# Frontend
cd <project-root>/frontend
cp .example.env .env

Ensure that your `.env` files are not tracked by git by checking the `.gitignore`.

Start Docker Services
The following will start the database service:

cd <project-root>
docker compose up postgres
Dockerized Dev Stack
For containerized local development with hot reload, the backend, frontend, worker, and supporting services can all run under docker-compose.dev.yml.
cd <project-root>
BACKEND_ENV_FILE=$HOME/.env/orchestra/.env.backend \
FRONTEND_ENV_FILE=$HOME/.env/orchestra/.env.frontend \
make dev.docker.up
Useful endpoints while debugging:
- Frontend: http://localhost:5173
- Backend API: http://localhost:8000/docs
- Dozzle log viewer: http://localhost:8088
Tail the main service logs in one stream:
make dev.docker.logs
If you also want the sandbox exec server inside the stack, start with `COMPOSE_PROFILES=tools make dev.docker.up`.
Setup Server Environment

With the Makefile (assumes you're using astral uv):

cd <project-root>/backend
make dev

Manual setup (if Makefile unavailable):

cd <project-root>/backend
uv venv
source .venv/bin/activate
uv sync
bash scripts/dev.sh

Setup Client Environment
# Change directory
cd <project-root>/frontend

# Install
npm install

# Run
npm run dev
Database Migrations
This project uses Alembic for database migrations. Here's how to work with migrations:
Initial Setup

Create the database schema (if it does not exist):

cd backend
alembic upgrade head

Seed default users:

python -m seeds.user_seeder

Working with Revisions

Create a new revision:

alembic revision -m "description_of_changes"

# Apply the next revision
alembic upgrade +1
# Upgrade to a specific revision
alembic upgrade <revision_id>
# Roll back one revision
alembic downgrade -1
# Roll back to a specific revision
alembic downgrade <revision_id>
# Show migration history
alembic history
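If you script migrations (for CI or deploy hooks), it helps to build the Alembic invocation programmatically. This is a hypothetical helper, not part of the repository; it just assembles the same CLI calls shown above for use with `subprocess.run`:

```python
import subprocess

def alembic_cmd(action: str, target: str = "") -> list[str]:
    """Build an alembic CLI invocation as an argv list.

    action: one of 'upgrade', 'downgrade', 'history'
    target: a revision id, 'head', '+1', '-1', etc. (unused for 'history')
    """
    if action not in {"upgrade", "downgrade", "history"}:
        raise ValueError(f"unsupported action: {action}")
    cmd = ["alembic", action]
    if action != "history" and target:
        cmd.append(target)
    return cmd

# Example: subprocess.run(alembic_cmd("upgrade", "head"), check=True)
print(alembic_cmd("upgrade", "head"))
```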
Run Playwright MCP Locally
Start Ngrok on port 8931:

ngrok http 8931

Run MCP server:

npx @playwright/mcp@latest \
  --port 8931 \
  --executable-path $HOME/.cache/ms-playwright/chromium-<version>/chrome-linux/chrome \
  --vision
🤝 Integrations
🗺️ Roadmap
Stay up to date on Discord. Full release history in Changelog.md.
March 2026
| Feature | Category | Status |
|---|---|---|
| Human-In-The-Loop | Agent Control | 🔵 Planned |
February 2026
| Feature | Category | Status |
|---|---|---|
| Search Threads | UX | ✅ Shipped |
| Migrate Memories Seeder | Data | ✅ Shipped |
| Docs Agent Guidance | Docs | 🟡 In Progress |
| RLM Skill | Skills | ✅ Shipped |
| Frontend Schedule Refactor | Scheduling | ✅ Shipped |
January 2026
| Feature | Category | Status |
|---|---|---|
| Distributed Workers (TaskIQ) | Infra | ✅ Shipped |
| Public Agents | Agents | ✅ Shipped |
| File Tree Sidebar | UX | ✅ Shipped |
| AWS Model Support | Integrations | ✅ Shipped |
| Shareable Thread Links | UX | ✅ Shipped |
| Subagent Tool Calls | UX | ✅ Shipped |
| User Default Settings | Settings | ✅ Shipped |
| Speech Dictation | UX | ✅ Shipped |
🏢 Enterprise
For organizations needing managed deployment, compliance, or dedicated support:
| Feature | Description |
|---|---|
| SSO/SAML | Integrate with your identity provider |
| Audit Logging | Comprehensive logs for compliance |
| Air-Gapped Deployment | Run in isolated environments |
| Priority Support | SLA-backed response times |
| Custom Integrations | Connect to your internal tools |
We partner with you to deploy Orchestra inside your infrastructure. Contact us to discuss your requirements.
🐳 Docker Deployment (GHCR / Docker Compose)
This section covers deploying the Orchestra backend using Docker. For local development, see the sections above.
📋 Prerequisites
- Docker installed
- Docker Compose installed
- Access to AI provider API keys (OpenAI, Anthropic, etc.)
🚀 Quick Start
Using Pre-built Image
Pull the latest image from GitHub Container Registry:
docker pull ghcr.io/ruska-ai/orchestra:latest
1. Environment Setup
Create a .env.docker file in the backend/ directory:
cd backend
cp .example.env .env.docker
Update the following values for Docker networking:
# Database - use container name instead of localhost
POSTGRES_CONNECTION_STRING="postgresql://admin:test1234@postgres:5432/orchestra?sslmode=disable"
# Tools - use container names for internal services
SEARX_SEARCH_HOST_URL="http://search_engine:8080"
SHELL_EXEC_SERVER_URL="http://exec_server:3005/exec"
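A common Docker pitfall is leaving `localhost` in a connection string: inside a container, `localhost` points at the container itself, not at a sibling service. The following sketch (illustration only, not shipped with the project) checks a connection string for that mistake:

```python
from urllib.parse import urlparse

def check_docker_host(conn_str: str) -> list[str]:
    """Warn when a connection string still points at localhost,
    which will not resolve to another container inside Docker."""
    warnings = []
    host = urlparse(conn_str).hostname
    if host in ("localhost", "127.0.0.1"):
        warnings.append(f"host '{host}' will not reach a sibling container; "
                        "use the compose service name (e.g. 'postgres') instead")
    return warnings

# Docker-ready value from .env.docker:
ok = check_docker_host(
    "postgresql://admin:test1234@postgres:5432/orchestra?sslmode=disable")
# A value copied from local development:
bad = check_docker_host("postgresql://admin:test1234@localhost:5432/orchestra")
print(ok)   # []
print(bad)  # one warning about localhost
```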
2. Start Services
From the project root directory:
# Start database and backend
docker compose up postgres orchestra
# Or start all services
docker compose up
3. Verify Deployment
The API will be available at http://localhost:8000
- API Docs: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
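In deploy scripts you often want to block until the API is actually answering. A minimal readiness probe against the health endpoint, using only the standard library (a sketch, assuming `/health` returns HTTP 200 when the service is up):

```python
import time
import urllib.error
import urllib.request

def wait_for_health(url: str = "http://localhost:8000/health",
                    timeout_s: float = 60.0, interval_s: float = 2.0) -> bool:
    """Poll the health endpoint until it answers 200 or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry
        time.sleep(interval_s)
    return False

# Usage after `docker compose up`:
#   if not wait_for_health():
#       raise SystemExit("backend did not become healthy in time")
```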
🧩 Docker Compose Services
| Service | Port | Description |
|---|---|---|
| `orchestra` | 8000 | Backend API |
| `postgres` | 5432 | PostgreSQL with pgvector |
| `minio` | 9000/9001 | S3-compatible file storage |
| `search_engine` | 8080 | SearXNG search engine |
| `exec_server` | 3005 | Shell execution server |
| `ollama` | 11434 | Local LLM inference (requires GPU) |
| `redis` | 6379 | Redis message broker (for workers) |
| `worker` | - | TaskIQ worker (no exposed port) |
🧱 Docker Compose Example
services:
# PGVector
postgres:
image: pgvector/pgvector:pg16
container_name: postgres
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: test1234
POSTGRES_DB: postgres
ports:
- "5432:5432"
# Server (use pre-built image or build locally)
orchestra:
image: ghcr.io/ruska-ai/orchestra:latest
container_name: orchestra
env_file: .env.docker
ports:
- "8000:8000"
depends_on:
- postgres
🏗️ Build Commands
Build with Script (Recommended)
The build script copies the Docker deployment README into the image and handles tagging:
# From project root
bash backend/scripts/build.sh
# Or with custom tag
bash backend/scripts/build.sh v1.0.0
Build with Docker Compose
docker compose build orchestra
Manual Build
# Copy README first, then build
cp docker/README.md backend/README.md
cd backend
docker build -t orchestra:local .
⚙️ Environment Variables
Application Config
| Variable | Description | Default |
|---|---|---|
| `APP_ENV` | Environment (development/production) | development |
| `APP_LOG_LEVEL` | Logging level | DEBUG |
| `APP_SECRET_KEY` | Application secret key | - |
| `JWT_SECRET_KEY` | JWT signing key | - |
| `USER_AGENT` | User agent string for requests | ruska-dev |
| `TEST_USER_ID` | Test user UUID | - |
Database
| Variable | Description | Default |
|---|---|---|
| `POSTGRES_CONNECTION_STRING` | PostgreSQL connection string | - |
AI Providers (at least one required)
| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | - |
| `GROQ_API_KEY` | Groq API key | - |
| `ANTHROPIC_API_KEY` | Anthropic API key | - |
| `XAI_API_KEY` | xAI API key | - |
| `OLLAMA_BASE_URL` | Ollama server URL | - |
Tool Config
| Variable | Description | Default |
|---|---|---|
| `SEARX_SEARCH_HOST_URL` | SearXNG search endpoint | http://localhost:8080 |
| `SHELL_EXEC_SERVER_URL` | Shell execution endpoint | http://localhost:3005/exec |
| `TAVILY_API_KEY` | Tavily search API key | - |
Distributed Workers (Optional)
| Variable | Description | Default |
|---|---|---|
| `REDIS_URL` | Redis connection for task queue | - |
| `DISTRIBUTED_WORKERS` | Enable distributed worker mode | false |
Note: When enabled, run the worker process separately:
make dev.worker
Storage
| Variable | Description | Default |
|---|---|---|
| `MINIO_HOST` | MinIO/S3 host URL | - |
| `S3_REGION` | S3 region | - |
| `ACCESS_KEY_ID` | S3 access key | - |
| `ACCESS_SECRET_KEY` | S3 secret key | - |
| `BUCKET` | S3 bucket name | enso_dev |
🗄️ Database Migrations
Run migrations inside the container:
# Using docker compose exec
docker compose exec orchestra alembic upgrade head
# Or run migrations before starting
docker compose run --rm orchestra alembic upgrade head
🚢 Production Considerations
Security
- Generate strong values for `APP_SECRET_KEY` and `JWT_SECRET_KEY`
- Use SSL/TLS termination (nginx, traefik, etc.)
- Restrict database access to internal networks
- Never expose `.env` files
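One way to generate strong secret values is Python's `secrets` module (a sketch; any cryptographically secure generator, such as `openssl rand`, works equally well):

```python
import secrets

def generate_secret(nbytes: int = 32) -> str:
    """Generate a URL-safe random secret with nbytes of entropy,
    suitable for values like APP_SECRET_KEY or JWT_SECRET_KEY."""
    return secrets.token_urlsafe(nbytes)

# Print values ready to paste into .env.docker:
print("APP_SECRET_KEY=" + generate_secret())
print("JWT_SECRET_KEY=" + generate_secret())
```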
Performance
- Configure appropriate resource limits in `docker-compose.yml`
- Use a reverse proxy for load balancing
- Enable PostgreSQL connection pooling for high traffic
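For connection pooling, one option is to tune the pool on the application side. This sketch assumes the backend uses SQLAlchemy (which its use of Alembic suggests); the parameter names are real `create_engine` options, but the values are illustrative, not the project's actual configuration:

```python
# Example pool settings for SQLAlchemy's create_engine.
POOL_SETTINGS = {
    "pool_size": 10,        # persistent connections kept open
    "max_overflow": 20,     # extra connections allowed under burst load
    "pool_timeout": 30,     # seconds to wait for a free connection
    "pool_recycle": 1800,   # recycle connections older than 30 minutes
    "pool_pre_ping": True,  # validate connections before handing them out
}

# Usage (requires sqlalchemy and a PostgreSQL driver installed):
# from sqlalchemy import create_engine
# engine = create_engine(POSTGRES_CONNECTION_STRING, **POOL_SETTINGS)
```

For very high traffic, an external pooler such as PgBouncer in front of PostgreSQL is the usual next step.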
Dockerfile Features
The Dockerfile uses a multi-stage build:
- Builder Stage: Installs dependencies, compiles Python to bytecode (`.pyc`)
- Runtime Stage: Ships only compiled bytecode for a smaller image

Note: Migration files (`.py`) are preserved since Alembic requires source files.
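The bytecode step in the builder stage corresponds to Python's standard `compileall` machinery. A self-contained illustration of what it produces (the real Dockerfile drives this against the application source, not this toy module):

```python
import compileall
import pathlib
import tempfile

# Compile a throwaway module tree to bytecode, as the builder stage
# does with the application source.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "app.py"
    src.write_text("def hello():\n    return 'ok'\n")
    ok = compileall.compile_dir(tmp, quiet=1)          # 1 on success
    pycs = list(pathlib.Path(tmp).rglob("*.pyc"))      # lands in __pycache__/
    print(ok, [p.name for p in pycs])
```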
🧰 Troubleshooting
Container won't start
# Check logs
docker compose logs orchestra
# Verify environment file exists
ls -la backend/.env.docker
Database connection failed
# Ensure postgres is running
docker compose ps postgres
# Check postgres logs
docker compose logs postgres
Port already in use
# Check what's using the port
lsof -i :8000
# Or change the port mapping in docker-compose.yml
ports:
- "8001:8000" # Map to different host port