astro-agent
Health — Passed
- License — License: NOASSERTION
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 33 GitHub stars
Code — Warning
- process.env — Environment variable access in src/cli.ts
Permissions — Passed
- Permissions — No dangerous permissions requested
This tool is an orchestrator for AI coding agents. It decomposes complex goals into tasks and executes them in parallel across multiple machines, streaming the results back to a centralized web dashboard.
Security Assessment
Risk Rating: High. The application's core function is to act as a remote agent runner, meaning it inherently executes shell commands and manages local system processes to control installed AI tools (like Claude Code or Codex). It requires user registration and actively communicates with a hosted cloud service for task delegation. While no hardcoded secrets were found and the environment variable access in the CLI is standard for authentication, the fundamental architecture gives a third-party web service the ability to trigger and control code execution on your local machine.
Quality Assessment
The project appears highly active, with its most recent repository push occurring today. It features a detailed README and targets a modern Node.js environment. However, the automated license check returned a warning (`NOASSERTION`), although the documentation explicitly states it uses the BSL-1.1 license. Developers should note that BSL-1.1 is a source-available license that restricts commercial use in competing production environments. Community trust is currently very low, with only 33 GitHub stars, indicating a small or early-stage user base.
Verdict
Use with caution: the tool is well-maintained and functions exactly as advertised, but installing it grants a remote service the ability to execute code on your machine, requiring high trust in the developer's security and intentions.
Mission control for the AI age.
Astro Agent Runner
Connect your machines. Let AI do the work.
Website · Get Started · Examples
📰 News
| Date | Update |
|---|---|
| 2026-03-19 | 🤖 Pi Coding Agent support — Astro now natively supports Pi, the coding agent powering OpenClaw. Auto-detected at launch alongside Claude Code, Codex, and OpenCode. Full streaming, tool result rendering, session preservation, and multi-turn resume. |
What is Astro?
Astro is an orchestrator for AI coding agents. It takes a complex goal, decomposes it into a dependency graph of tasks, and executes them in parallel across your machines — your laptop, GPU servers, HPC clusters, cloud VMs.
Mission control lives in the browser. Your machines do the work. The Agent Runner is the piece that runs on each machine — it receives tasks, runs AI agents, and streams results back.
Self-hosting is on the roadmap. Currently Astro runs as a hosted service at astroanywhere.com.
Quick Start
Step 1 — Register
Create an account at astroanywhere.com.
Step 2 — Install
Install at least one AI coding agent:
npm i -g @anthropic-ai/claude-code # Claude Code
npm i -g @openai/codex # Codex
npm i -g openclaw # OpenClaw
bun i -g opencode # OpenCode
Optionally install GitHub CLI (gh) for automatic PR creation — recommended but not required.
Then launch the agent runner:
npx @astroanywhere/agent@latest launch
One command. It detects your AI agents, discovers your machine hardware, finds your SSH hosts, authenticates you, and starts listening for tasks. No global install — npx fetches the latest version.
Step 3 — Start Building
Open the Astro Dashboard, create a project, and describe what you want to build. Try one of these to get started:
- "Add dark mode support to my React app" — a single focused task
- "Build a REST API with auth, CRUD endpoints, and tests" — Astro decomposes this into parallel tasks
- "Refactor the data layer to use a repository pattern" — multi-step refactoring across files
Or jump straight in and describe your own goal. Astro will generate a plan, show you the dependency graph, and execute across your machines.
What You'll See
$ npx @astroanywhere/agent@latest launch
Astro Agent Runner v0.2.1
+--------------------------------------------------------------+
| my-macbook (this device) |
| Apple Silicon - darwin/arm64 - v0.2.1 |
| |
| Hardware |
| CPU Apple M3 Max (16 cores) |
| RAM 128 GB (98 GB available) |
| GPU Apple M3 Max (48 GB) |
| |
| AI Agents |
| > claude-code v1.0.22 - model: sonnet-4 |
| > codex v0.1.2 |
| > openclaw v0.3.1 |
| > opencode v0.2.0 |
| |
| Runner: a1b2c3d4 |
+--------------------------------------------------------------+
Discovering SSH hosts... found 2: hpc-login, dev-vm
To authenticate, open this URL in your browser:
https://astroanywhere.com/device?code=ABCD-1234
Waiting for approval...
> Authenticated as [email protected]
> Machine "my-macbook" registered
Installing on remote hosts...
+------------------------------------------------+
| [*] hpc-login (running) |
| [email protected] |
| linux/x86_64 - 128 cores - 1024 GB RAM |
| NVIDIA A100 (80 GB) x4 |
| |
| AI Agents |
| > claude-code v1.0.22 |
| > openclaw v0.3.1 |
+------------------------------------------------+
+------------------------------------------------+
| [*] dev-vm (running) |
| [email protected] |
| linux/x86_64 - 8 cores - 32 GB RAM |
| |
| AI Agents |
| > codex v0.1.2 |
| > opencode v0.2.0 |
+------------------------------------------------+
Remote agents: 2 running, 0 failed
> Connected to relay
Ready. Listening for tasks...
Your laptop and all remote hosts appear in the Astro Dashboard. Dispatch tasks to any of them.
Remote Machines via SSH
launch reads your ~/.ssh/config, discovers reachable hosts, installs the agent runner over SSH, and starts them — all from your laptop. To set up a single remote machine manually:
ssh user@remote-host
npx @astroanywhere/agent@latest launch --no-ssh-config
Astro picks the best available machine for each task based on load and capabilities.
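The selection logic isn't documented in detail; as a rough mental model, a load-and-capability score might look like the sketch below. The `Machine` shape, the scoring weights, and the sample data are illustrative assumptions, not Astro's actual algorithm:

```typescript
// Hypothetical sketch of load/capability-based machine selection.
// Fields and weights are illustrative assumptions, not Astro's real logic.
interface Machine {
  name: string;
  cpuCores: number;
  freeRamGb: number;
  hasGpu: boolean;
  runningTasks: number;
}

function pickMachine(machines: Machine[], needsGpu: boolean): Machine {
  const eligible = machines.filter((m) => !needsGpu || m.hasGpu);
  if (eligible.length === 0) {
    throw new Error("no machine satisfies the task requirements");
  }
  // Prefer idle, well-resourced machines: capability minus current load.
  const score = (m: Machine) =>
    m.cpuCores + m.freeRamGb / 4 - m.runningTasks * 10;
  return eligible.reduce((best, m) => (score(m) > score(best) ? m : best));
}

const laptop: Machine = { name: "my-macbook", cpuCores: 16, freeRamGb: 98, hasGpu: true, runningTasks: 2 };
const hpc: Machine = { name: "hpc-login", cpuCores: 128, freeRamGb: 1024, hasGpu: true, runningTasks: 0 };
console.log(pickMachine([laptop, hpc], true).name); // hpc-login
```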
Installing on Slurm (HPC Clusters)
On HPC clusters, login nodes enforce strict resource limits and kill long-running processes. You have two options for installing the agent runner:
Option A — From the login node (simplest)
SSH to the login node and run setup directly. The setup process is lightweight and completes in under a minute:
ssh [email protected]
npx @astroanywhere/agent@latest launch --no-ssh-config
Option B — From a compute node (if the login node blocks it)
Request an interactive allocation first, then launch from the compute node:
ssh [email protected]
srun --time=8:00:00 --mem=4G --pty bash
npx @astroanywhere/agent@latest launch --no-ssh-config
Before running launch, install at least one AI coding agent on the cluster:
npm i -g @anthropic-ai/claude-code # Claude Code (recommended)
npm i -g @openai/codex # or Codex
npm i -g openclaw # or OpenClaw
Note: The agent runner uses Slurm to submit AI agent jobs to compute nodes automatically. Once installed, Astro dispatches tasks as Slurm jobs — you don't need to manage `sbatch` yourself.
Re-setup & Force Setup
When you reinstall the agent runner on a device that was previously configured, the existing configuration (SSH hosts, authentication tokens, relay settings) is reused from the first run. This means new SSH hosts won't be discovered and stale settings won't be refreshed.
To force a full re-setup:
npx @astroanywhere/agent@latest launch --force-setup
This re-runs the entire setup flow: re-detects AI agents, re-discovers SSH hosts, re-authenticates with the relay, and updates all stored configuration. Use this when:
- You've added new SSH hosts to ~/.ssh/config
- Authentication tokens have expired or changed
- You've installed or removed AI agents
- The agent runner was updated to a new version with config changes
- Remote hosts were reconfigured or replaced
You can also run setup independently without starting the agent:
npx @astroanywhere/agent@latest setup --with-ssh-config
How to Use Astro
Four ways to use Astro, from simplest to most flexible.
1. Try the Built-in Templates
The fastest way to see Astro in action. Each template is a pre-built workflow that generates a complete deliverable — just provide your inputs.
Moat & Fragility Analysis
Generate a comprehensive stock analysis report with competitive advantage scoring, moat assessment (7 Powers framework), and anti-fragility rating.
What you provide:
- Ticker symbols (e.g., AAPL, GOOG, MSFT)
- A working directory for output files
What you get: PDF report with charts, scoring tables, and investment recommendations.
Create a new project → select "Moat & Fragility Analysis" template
→ enter tickers and output directory → Astro generates plan → execute
Academic Paper Review
Generate a structured peer review with scoring rubric, methodology critique, and improvement suggestions.
What you provide:
- A PDF of the paper (or arXiv URL)
- A working directory for the review output
What you get: Detailed review document following conference standards.
Academic Presentation
Generate conference-ready presentation slides from a research paper.
What you provide:
- A PDF of the paper (or arXiv URL)
- A working directory for the slides
What you get: Slide deck with figures, key results, and speaker notes.
Daily Paper Analysis
Generate a comprehensive digest of all HuggingFace Daily Papers for any date, plus in-depth analyses of the top-ranked papers with architecture figures and key equations.
What you provide:
- A date (YYYY-MM-DD) — weekdays typically have 24-49 papers
- Number of papers for deep analysis (default: top 10 by upvotes)
- A working directory for output files
What you get: All-papers digest (HTML + Markdown + PDF), individual deep-dive analyses with SVG architecture diagrams, and a compiled document assembling everything. All outputs are dual-format: styled HTML for direct reading and Obsidian-compatible Markdown with YAML frontmatter.
All templates run as parallel task graphs. Astro decomposes the work into independent subtasks (data collection, analysis, writing) and executes them across your machines simultaneously. See the astro-examples repo for concrete examples of planning DAGs and deliverables from the built-in templates.
2. Plan & Execute Complex Tasks
For tasks that don't fit a template — building features, refactoring codebases, writing research pipelines. Describe your goal in natural language and let Astro decompose it into a dependency graph of tasks.
The Workflow
Step 1 — Describe your goal. Write what you want to achieve in plain language. Point Astro at a repository or working directory.
"Build a REST API with authentication, CRUD endpoints for users and posts,
database migrations, and comprehensive test coverage"
Step 2 — AI generates a plan. Astro creates a dependency graph of tasks, each with a clear scope, estimated effort, and dependencies. You see the full graph before anything executes.
Step 3 — Review and refine. You can:
- Chat with the planning agent to replan — "Split the auth task into JWT and OAuth"
- Edit tasks directly — add, remove, reorder, change descriptions
- Adjust dependencies — make tasks parallel or sequential
Step 4 — Execute. Tasks dispatch to your machines in dependency order. Independent tasks run in parallel on separate git branches. You get real-time visibility into every agent session.
Step 5 — Steer mid-flight. While tasks execute, you can:
- Send guidance to a running agent — "Use Postgres instead of SQLite"
- Approve or reject tool calls and file changes
- Replan remaining tasks based on what you've seen so far
Each completed task produces a pull request (or commits, depending on your delivery mode). Review and merge at your pace.
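The plan described above is a DAG, and execution amounts to dispatching in dependency order. A minimal sketch of that idea, using an illustrative task shape rather than Astro's internal format:

```typescript
// Minimal sketch of executing a task DAG in dependency order.
// The Task shape is an illustrative assumption, not Astro's internal format.
interface Task {
  id: string;
  deps: string[];
}

// Group tasks into "waves": every task in a wave has all of its
// dependencies satisfied, so the whole wave can run in parallel.
function schedule(tasks: Task[]): string[][] {
  const done = new Set<string>();
  const waves: string[][] = [];
  let remaining = tasks;
  while (remaining.length > 0) {
    const ready = remaining.filter((t) => t.deps.every((d) => done.has(d)));
    if (ready.length === 0) throw new Error("cycle in dependency graph");
    waves.push(ready.map((t) => t.id));
    ready.forEach((t) => done.add(t.id));
    remaining = remaining.filter((t) => !done.has(t.id));
  }
  return waves;
}

const plan: Task[] = [
  { id: "auth", deps: [] },
  { id: "crud", deps: [] },
  { id: "migrations", deps: [] },
  { id: "tests", deps: ["auth", "crud", "migrations"] },
];
console.log(schedule(plan)); // [["auth","crud","migrations"],["tests"]]
```

The independent tasks land in the first wave and run concurrently; the test task waits for all three.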
3. Custom Projects & Task Graphs
For recurring workflows or custom pipelines, create your own projects with hand-crafted task graphs. Mix independent and dependent tasks to model any workflow.
Example: Daily Research Digest
Create a project called "Daily Research Digest" with three tasks, or click to import:
| Task | Dependencies | What it does |
|---|---|---|
| Scrape trending from X/Twitter | None | Collect today's trending topics in AI/ML |
| Get top papers from HuggingFace | None | Fetch today's top papers from Daily Papers |
| Summarize & generate report | Both above | Combine sources into a Markdown digest |
The first two tasks run in parallel (no dependencies). The summary task waits for both to finish, then produces the final report.
Example: Data Pipeline
Create a sequential pipeline where each step depends on the previous:
Collect Data → Process & Analyze → Generate Report → Done
Example: Multi-Feature Development
Create parallel feature branches that merge independently:
| Task | Dependencies | Delivery |
|---|---|---|
| Add user authentication | None | PR to main |
| Build API endpoints | None | PR to main |
| Write database migrations | None | PR to main |
| Integration tests | All three above | PR to main |
Three features develop in parallel on isolated git worktrees. The integration test task waits for all three, then runs against the combined code.
How to Create Custom Tasks
- Create a project in the Dashboard — give it a name and point it at a directory or repo
- Add tasks manually — each task has a title, description, and optional dependencies
- Draw dependencies — connect tasks in the graph view to define execution order
- Execute — Astro dispatches tasks respecting the dependency graph
4. Playground — Single Agent Sessions
For quick, one-off tasks that don't need planning. The Playground launches a direct Claude Code or Codex session on any of your registered machines.
Open Playground → pick a machine → describe what you want → go
This is equivalent to opening a terminal and running claude or codex directly, but with:
- Machine selection — run on your laptop, a GPU server, or an HPC node
- Full observability — see the session from any device, no terminal needed
- Session persistence — close the browser, come back later, the session is still there
- Steering — send messages to the agent mid-execution from the Dashboard
Use Playground when you want to:
- Quickly prototype or debug something on a specific machine
- Run a one-off analysis without creating a full project
- Test an agent on a new codebase before committing to a plan
Authentication
Key concept: Astro does not access your API keys directly. The agent runner spawns AI agents (Claude Code, Codex, etc.) as subprocesses and passes your shell environment through. Each agent handles its own authentication using its own credentials. Your keys never leave your machine and Astro never sees them.
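In Node terms, the pass-through described above amounts to spawning the agent CLI with the inherited environment, so credentials reach the agent without the orchestrator parsing them. A simplified sketch, not the runner's actual code (a real runner would spawn asynchronously and capture output):

```typescript
// Simplified sketch of delegating auth to the agent CLI: spawn it as a
// subprocess with the parent's environment inherited, so variables like
// ANTHROPIC_API_KEY reach the agent while the orchestrator never reads them.
// Not Astro's actual implementation.
import { spawnSync } from "node:child_process";

function runAgent(command: string, args: string[]): number {
  const result = spawnSync(command, args, {
    env: process.env, // inherit the shell environment (API keys included)
    stdio: "inherit", // stream agent output straight through
  });
  return result.status ?? 1;
}

// e.g. runAgent("claude", ["-p", "add dark mode support"]);
```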
Claude Code
Claude Code supports multiple authentication backends. Add the relevant environment variables to your shell profile (~/.zshrc on macOS, ~/.bashrc on Linux).
Anthropic cloud — OAuth token (recommended)
claude setup-token
# Then add to your shell profile:
export CLAUDE_CODE_OAUTH_TOKEN=<paste-token-here>
Anthropic cloud — API key
export ANTHROPIC_API_KEY=sk-ant-...
Amazon Bedrock
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-west-2
# Option 1: explicit keys
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
# Option 2: AWS profile (recommended)
export AWS_PROFILE=default
Bedrock models use different model IDs (e.g., anthropic.claude-sonnet-4-20250514). The agent runner auto-detects Bedrock model formats and disables sandbox mode, which is not supported on Bedrock.
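The auto-detection presumably keys off the model-ID shape: Bedrock inference IDs are vendor-prefixed (`anthropic.claude-...`), unlike Anthropic API names (`claude-sonnet-4-...`). A hedged sketch of such a check — the heuristic is an assumption, not the runner's documented rule:

```typescript
// Illustrative guess at Bedrock model-ID detection. Bedrock IDs carry a
// vendor prefix ("anthropic."), optionally behind a region prefix ("us.").
// Not necessarily the runner's actual check.
function looksLikeBedrockModel(modelId: string): boolean {
  return /^([a-z]{2}\.)?anthropic\./.test(modelId);
}

console.log(looksLikeBedrockModel("anthropic.claude-sonnet-4-20250514")); // true
console.log(looksLikeBedrockModel("claude-sonnet-4")); // false
```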
Google Vertex AI
export CLAUDE_CODE_USE_VERTEX=1
export CLOUD_ML_REGION=us-east5
export ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project
Third-party models via Claude Code
Claude Code can also be configured to use third-party model providers that expose an OpenAI-compatible API. This allows using models like MiniMax-M1, Kimi K2, GLM-5, or Doubao through Claude Code's interface. Set ANTHROPIC_BASE_URL, ANTHROPIC_API_KEY, and ANTHROPIC_MODEL to point at the provider:
# MiniMax
export ANTHROPIC_BASE_URL=https://api.minimax.chat/v1
export ANTHROPIC_API_KEY=<your-minimax-key>
export ANTHROPIC_MODEL=MiniMax-M1
# Kimi (Moonshot)
export ANTHROPIC_BASE_URL=https://api.moonshot.cn/v1
export ANTHROPIC_API_KEY=<your-moonshot-key>
export ANTHROPIC_MODEL=kimi-k2
# GLM (Zhipu AI)
export ANTHROPIC_BASE_URL=https://open.bigmodel.cn/api/paas/v4
export ANTHROPIC_API_KEY=<your-zhipu-key>
export ANTHROPIC_MODEL=glm-5
# Doubao (ByteDance / Volcengine ModelArk)
export ANTHROPIC_BASE_URL=https://ark.cn-beijing.volces.com/api/v3
export ANTHROPIC_API_KEY=<your-volcengine-key>
export ANTHROPIC_MODEL=<your-endpoint-id>
Experimental: Third-party model support depends on Claude Code's compatibility layer. Some features (tool use, streaming, sandbox) may not work with all providers. This is not tested by the Astro team — refer to each provider's documentation for Claude Code integration details (e.g., BytePlus ModelArk, Zhipu AI).
Troubleshooting: On remote machines or HPC clusters, session-based login (claude login) may not work. Use one of the export methods above instead.
Codex
Codex authenticates with an OpenAI API key. Add it to your shell profile:
export OPENAI_API_KEY=sk-...
Or configure it in ~/.codex/config.toml:
model = "gpt-5.3-codex"
Available Codex models (run `codex -m <model_name>` to switch):
| Model | Description |
|---|---|
| `gpt-5.3-codex` | Latest frontier agentic coding model (default) |
| `gpt-5.4` | Latest frontier agentic coding model |
| `gpt-5.2-codex` | Frontier agentic coding model |
| `gpt-5.1-codex-max` | Codex-optimized flagship for deep and fast reasoning |
| `gpt-5.2` | Latest frontier model with improvements across knowledge, reasoning, and coding |
| `gpt-5.1-codex-mini` | Optimized for Codex — cheaper, faster, but less capable |
OpenClaw & OpenCode
These agents support multiple model providers (OpenAI, Anthropic, Google, etc.). Configure them through their own CLI or config files — refer to each agent's documentation for details.
Commands
The agent runner provides several commands for managing your setup:
| Command | Description |
|---|---|
| `launch` | Setup (if needed) + start — the recommended one-command entry point |
| `start` | Start the agent runner (assumes setup is already complete) |
| `stop` | Stop the running agent process |
| `status` | Show current agent status, machine info, and connection state |
| `logs` | View agent runner logs (`-f` to follow, `-n` for line count, `--host` for remote) |
| `setup` | Run initial setup independently (detect agents, authenticate, configure relay) |
| `auth` | Set or clear Claude OAuth token for agent SDK authentication |
| `config` | Show, modify, reset, or import configuration |
| `providers` | List detected AI agent providers on this machine |
| `resources` | Show machine hardware (CPU, memory, GPU) |
| `hosts` | Discover remote hosts from SSH config |
| `connect` | Alias for `start --foreground` — run in the current terminal |
| `mcp` | Start MCP server for Claude Code integration (stdio mode) |
Common Options
launch supports all setup and start options in one command:
# Force re-setup + start in foreground
npx @astroanywhere/agent@latest launch --force-setup -f
# Skip SSH discovery (local-only mode)
npx @astroanywhere/agent@latest launch --no-ssh-config
# Skip remote host launching (setup SSH but only run local)
npx @astroanywhere/agent@latest launch --no-launch-all
# Non-interactive mode (for scripts and batch jobs)
npx @astroanywhere/agent@latest launch --non-interactive --no-ssh-config
start controls the runtime behavior:
# Run in foreground with debug logging
npx @astroanywhere/agent@latest start -f --log-level debug
# Limit concurrent tasks
npx @astroanywhere/agent@latest start --max-tasks 2
# Keep worktrees after task completion (for debugging)
npx @astroanywhere/agent@latest start --preserve-worktrees
logs helps with troubleshooting:
# Follow logs in real time
npx @astroanywhere/agent@latest logs -f
# Filter logs for a specific task
npx @astroanywhere/agent@latest logs -f | grep "taskId"
# Show last 100 lines
npx @astroanywhere/agent@latest logs -n 100
# View logs from a remote host
npx @astroanywhere/agent@latest logs --host hpc-login
Key Features
1. Planning & Parallel Execution
Describe what you want to build. Astro decomposes your goal into a dependency graph (DAG) of tasks, then executes them in parallel across your machines — respecting the dependency order automatically.
A complex feature that would take hours of serial work gets broken into independent subtasks. Tasks without dependencies run simultaneously on separate git branches. Dependent tasks wait only for their upstream inputs, not for unrelated work to finish.
Tasks A, B, C run in parallel. Task D waits for A + B. Task E waits for C.
Total time = longest path, not sum of all tasks.
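The longest-path claim is easy to check with a small calculation. Assuming illustrative durations for the A–E example above (the numbers are made up for the demonstration):

```typescript
// Wall-clock time for a parallel DAG = length of the critical (longest)
// path, not the sum of task durations. Durations below are illustrative.
interface Node {
  dur: number;
  deps: string[];
}

function criticalPath(graph: Record<string, Node>): number {
  const memo = new Map<string, number>();
  const finish = (id: string): number => {
    if (!memo.has(id)) {
      const n = graph[id];
      memo.set(id, n.dur + Math.max(0, ...n.deps.map(finish)));
    }
    return memo.get(id)!;
  };
  return Math.max(...Object.keys(graph).map(finish));
}

const graph: Record<string, Node> = {
  A: { dur: 10, deps: [] },
  B: { dur: 20, deps: [] },
  C: { dur: 15, deps: [] },
  D: { dur: 5, deps: ["A", "B"] }, // D waits for A + B
  E: { dur: 30, deps: ["C"] },     // E waits for C
};
// Serial sum would be 80; the critical path C -> E finishes at 45.
console.log(criticalPath(graph)); // 45
```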
2. Supporting Mainstream AI Agents
Astro works with the AI coding agents you already use. Install any supported agent — Astro detects it at startup and dispatches tasks automatically.
| Agent | Install | Website |
|---|---|---|
| Claude Code | `npm i -g @anthropic-ai/claude-code` | anthropic.com/claude-code |
| Codex | `npm i -g @openai/codex` | github.com/openai/codex |
| Pi | `npm i -g @mariozechner/pi-coding-agent` | github.com/badlogic/pi-mono |
| OpenClaw | `npm i -g openclaw` | github.com/openclaw-ai/openclaw |
| OpenCode | `bun i -g opencode` | github.com/opencode-ai/opencode |
Pi is the coding agent that powers OpenClaw. If you have OpenClaw installed, Pi is already available — Astro detects it automatically. You can also install Pi standalone for direct access to its multi-provider model support (Anthropic, OpenAI, Google, Bedrock, custom providers via models.json).
All agents get full project context injection, real-time output streaming, and session preservation for multi-turn resume. Your API keys stay on your machine — Astro never sees them.
3. GitHub-Native Workflow
Every task runs on its own git worktree — a real, isolated branch with no conflicts. When the agent finishes, the runner commits the changes, pushes the branch, and opens a pull request automatically.
No merge conflicts between parallel tasks. Each branch is isolated. Review and merge at your own pace.
4. Mission Control & Full Observability
The Astro Dashboard gives you full visibility into every agent session, tool execution, and file change across all your projects and machines:
- Observe — see the full dialogue of AI agents, every tool call, every file diff, in real time
- Steer — send guidance or redirect agents mid-execution
- Decide — approve, reject, or rerun from any device — no terminal needed
- Scale — multi-machine routing by load and capability
Architecture
Astro Server generates plans, breaks them into tasks, and dispatches to agent runners. Each Agent Runner (this repo) selects an AI agent, deploys jobs to compute backends, and streams progress back to the server.
Integration in OpenClaw
Monitor and control Astro directly inside OpenClaw. View project status, track running tasks, steer agents, and approve results — all from the OpenClaw interface.
This integration is under development. Details coming soon.