relay
Health Pass
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 19 GitHub stars
Code Pass
- Code scan — Scanned 6 files during light audit, no dangerous patterns found
Permissions Pass
- Permissions — No dangerous permissions requested
Relay is a local Rust-based agent that transfers your full session context from one AI coding assistant to another (e.g., Claude to Codex) so you can seamlessly continue working when you hit a rate limit.
Security Assessment
This tool operates entirely through local file parsing: it reads Claude's `.jsonl` session transcripts to capture conversation history and tool calls. It makes no external network requests, and the project claims "zero network capture." The light audit of 6 files found no dangerous patterns, dangerous permissions, or hardcoded secrets. Because it reads session transcripts—which can contain your code, file paths, and potentially sensitive project details—be aware that it accesses this data; however, everything stays local and nothing is exfiltrated. Overall risk: Low.
Quality Assessment
The project is fresh (last push was today) but appears well-structured with a clear MIT license. It's a compiled Rust binary (~4.6 MB) with no runtime dependencies, which reduces supply chain risks. At 19 GitHub stars, the community is small but present. The README is thorough and professional, detailing features, usage, and context control options. The tool is distributed via both npm and GitHub releases. Being relatively new with a small user base means it hasn't undergone widespread community scrutiny yet.
Verdict
Safe to use — it's a local-only, MIT-licensed tool with no network activity, no dangerous permissions, and a clean code scan, though the small community means you should monitor for ongoing maintenance.
When Claude Code hits its rate limit, Relay hands off your full session context to Codex, Gemini, Aider, or 5 other agents — so your work never stops.
Relay
When Claude Code hits its rate limit, another agent picks up exactly where you left off — with full conversation context.

Features
- Full conversation capture — Reads Claude's actual `.jsonl` transcript, not just git
- 8 agent adapters — Codex, Claude, Aider, Gemini, Copilot, OpenCode, Ollama, OpenAI
- Interactive TUI — Spinners, progress steps, fuzzy agent picker, color-coded output
- `relay resume` — When Claude comes back, see what the fallback agent did
- `relay history` — Browse all past handoffs with timestamps
- `relay diff` — Show exactly what changed during the handoff
- Clipboard mode — `--clipboard` copies the handoff for pasting into any tool
- Handoff templates — `--template minimal|full|raw` for different formats
- Rate limit auto-detection — PostToolUse hook triggers handoff automatically
- Context control — `--turns 10 --include git,todos` to customize
- Zero network capture — Pure local file parsing, < 100 ms
- 4.6 MB binary — Rust, no runtime, no GC
The Problem
It's 6:20 PM. Your submission is at 7 PM. You're deep in a Claude Code session — 45 minutes of context, decisions, half-finished code. Then:
```
Rate limit reached. Please wait.
```
Your entire session context is gone. You open Codex or Gemini and spend 20 minutes re-explaining everything. By the time you're set up, it's 6:50.
The Solution
```bash
relay handoff --to codex
```
Relay reads your actual Claude Code session — the full conversation, every tool call, every file edit, every error — compresses it into a handoff package, and opens Codex (or Gemini, Aider, Ollama, etc.) with complete context. The new agent knows exactly what you were doing and waits for your instructions.
What Relay Captures
This is NOT just git state. Relay reads Claude's actual .jsonl session transcript:
```
════════════════════════════════════════════════════════
📋 Session Snapshot
════════════════════════════════════════════════════════
📁 /Users/dev/myproject
🕐 2026-04-05 14:46

🎯 Current Task
──────────────────────────────────────────────────
Fix the mobile/desktop page separation in the footer

📝 Progress
──────────────────────────────────────────────────
✅ Database schema + REST API
✅ Landing page overhaul
🔄 Footer link separation (IN PROGRESS)
⏳ Auth system

🚨 Last Error
──────────────────────────────────────────────────
Error: Next.js couldn't find the package from project directory

💡 Key Decisions
──────────────────────────────────────────────────
• Using Socket.io instead of raw WebSockets
• Clean reinstall fixed the @next/swc-darwin-arm64 issue

💬 Conversation (25 turns)
──────────────────────────────────────────────────
🤖 AI Now update the landing page footer too.
🔧 TOOL [Edit] pages/index.tsx (replacing 488 chars)
📤 OUT File updated successfully.
🤖 AI Add /mobile to the Layout bypass list.
🔧 TOOL [Edit] components/Layout.tsx (replacing 99 chars)
🔧 TOOL [Bash] npx next build
📤 OUT ✓ Build passed — 12 pages compiled
```
The fallback agent sees everything: what Claude was thinking, what files it edited, what errors it hit, and where it stopped.
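As an illustration of what reading a `.jsonl` transcript involves, here is a minimal, dependency-free Rust sketch that counts events by type. The field layout (`"type":"assistant"` and so on) and the plain substring matching are simplifying assumptions; Relay's actual parser presumably handles full JSON.

```rust
/// Count transcript lines that mention a given event type, e.g.
/// "assistant" or "user". Each .jsonl line is one JSON object; a bare
/// substring match keeps this sketch dependency-free at the cost of
/// robustness (a real parser would use a JSON library).
fn count_events(transcript: &str, event_type: &str) -> usize {
    let needle = format!("\"type\":\"{}\"", event_type);
    transcript.lines().filter(|l| l.contains(&needle)).count()
}

fn main() {
    // Hypothetical two-line transcript; Relay reads the real file from
    // ~/.claude/projects/<project>/<session>.jsonl
    let transcript = "{\"type\":\"user\",\"message\":\"fix the footer\"}\n\
                      {\"type\":\"assistant\",\"message\":\"on it\"}";
    println!("{} assistant turns", count_events(transcript, "assistant"));
}
```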
8 Supported Agents
```
════════════════════════════════════════════════════════
🤖 Available Agents
════════════════════════════════════════════════════════
Priority: codex → claude → aider → gemini → copilot → opencode → ollama → openai

✅ codex      Found at /opt/homebrew/bin/codex
✅ copilot    Found at /opt/homebrew/bin/copilot
❌ claude     Install: npm install -g @anthropic-ai/claude-code
❌ aider      Install: pip install aider-chat
❌ gemini     Set GEMINI_API_KEY env var
❌ opencode   Install: go install github.com/opencode-ai/opencode@latest
❌ ollama     Not reachable at http://localhost:11434
❌ openai     Set OPENAI_API_KEY env var

🚀 2 agents ready for handoff
```
| Agent | Type | How it launches |
|---|---|---|
| Codex | CLI (OpenAI) | Opens interactive TUI with context |
| Claude | CLI (Anthropic) | New Claude session with context |
| Aider | CLI (open source) | Opens with --message handoff |
| Gemini | API / CLI | Gemini CLI or REST API |
| Copilot | CLI (GitHub) | Opens with context |
| OpenCode | CLI (Go) | Opens with context |
| Ollama | Local API | REST call to local model |
| OpenAI | API | GPT-4o / GPT-5.4 API call |
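The `relay agents` output above can be approximated with a simple PATH scan. This is an illustrative sketch rather than Relay's actual detection code: the priority order is taken from the README, and `is_file()` only checks existence, not executability.

```rust
use std::env;
use std::path::PathBuf;

/// Return the full path of the first PATH entry containing `name`.
fn find_in_path(name: &str) -> Option<PathBuf> {
    let path = env::var_os("PATH")?;
    env::split_paths(&path)
        .map(|dir| dir.join(name))
        .find(|candidate| candidate.is_file())
}

fn main() {
    // Priority order from the README.
    let priority = ["codex", "claude", "aider", "gemini",
                    "copilot", "opencode", "ollama", "openai"];
    for agent in priority {
        match find_in_path(agent) {
            Some(p) => println!("✅ {agent:10} Found at {}", p.display()),
            None => println!("❌ {agent:10} Not found on PATH"),
        }
    }
}
```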
Quick Start
```bash
# Install
git clone https://github.com/Manavarya09/relay
cd relay && ./scripts/build.sh

# Symlink to PATH (avoids macOS quarantine)
ln -sf $(pwd)/core/target/release/relay ~/.cargo/bin/relay

# Generate config
relay init

# Check what agents you have
relay agents

# See your current session snapshot
relay status

# Hand off to Codex (interactive — opens TUI)
relay handoff --to codex

# Interactive agent picker
relay handoff

# With deadline urgency
relay handoff --to codex --deadline "7:00 PM"

# Copy to clipboard instead
relay handoff --clipboard

# Minimal handoff (just task + error + git)
relay handoff --template minimal --to codex

# When Claude comes back — see what happened
relay resume

# List all past handoffs
relay history

# What changed since handoff?
relay diff
```
Context Control
```bash
# Default: last 25 conversation turns + everything
relay handoff --to codex

# Light: 10 turns only
relay handoff --to codex --turns 10

# Only git state + todos (no conversation)
relay handoff --to codex --include git,todos

# Only conversation
relay handoff --to codex --include conversation

# Dry run — see what gets sent without launching
relay handoff --dry-run
```
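A sketch of how a `--include` value like `git,todos` might be parsed into a set of sections. The list of known section names is inferred from the flags above; the real CLI may validate differently.

```rust
use std::collections::BTreeSet;

/// Parse a comma-separated --include value into a normalized set of
/// section names. Unknown names are silently dropped in this sketch;
/// a real CLI would probably report them as errors.
fn parse_include(arg: &str) -> BTreeSet<&str> {
    const KNOWN: [&str; 3] = ["git", "todos", "conversation"];
    arg.split(',')
        .map(str::trim)
        .filter(|s| KNOWN.contains(s))
        .collect()
}

fn main() {
    let sections = parse_include("git, todos");
    println!("{sections:?}"); // {"git", "todos"}
}
```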
How It Works
- Reads `~/.claude/projects/<project>/<session>.jsonl` — Claude's actual transcript
- Extracts user messages, assistant responses, tool calls (Bash, Read, Write, Edit), tool results, and errors
- Reads TodoWrite state from the JSONL (your live todo list)
- Captures git branch, diff summary, uncommitted files, recent commits
- Compresses into a handoff prompt optimized for the target agent
- Launches the agent interactively with inherited stdin/stdout
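The git-capture step above could look roughly like the following, assuming Relay shells out to `git` (the actual implementation may use a library instead).

```rust
use std::process::Command;

/// Run a git subcommand in `repo` and return trimmed stdout, or None
/// if git exits with an error (e.g. the directory is not a repository).
fn git(repo: &str, args: &[&str]) -> Option<String> {
    let out = Command::new("git")
        .arg("-C")
        .arg(repo)
        .args(args)
        .output()
        .ok()?;
    if !out.status.success() {
        return None;
    }
    Some(String::from_utf8_lossy(&out.stdout).trim().to_string())
}

fn main() {
    let repo = ".";
    let branch = git(repo, &["rev-parse", "--abbrev-ref", "HEAD"]);
    let diff = git(repo, &["diff", "--stat"]);         // summary of changes
    let dirty = git(repo, &["status", "--porcelain"]); // uncommitted files
    println!("branch: {branch:?}\ndiff: {diff:?}\ndirty: {dirty:?}");
}
```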
Config
`~/.relay/config.toml`:

```toml
[general]
priority = ["codex", "claude", "aider", "gemini", "copilot", "opencode", "ollama", "openai"]
max_context_tokens = 8000
auto_handoff = true

[agents.codex]
model = "gpt-5.4"

[agents.gemini]
api_key = "your-key"

[agents.openai]
api_key = "your-key"

[agents.ollama]
url = "http://localhost:11434"
model = "llama3"
```
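For illustration, a dependency-free way to pull a single value out of a config like the one above (Relay itself would presumably use a real TOML parser such as the `toml` crate):

```rust
/// Naive key lookup in TOML-ish text: finds the first `key = value`
/// line anywhere in the file, ignoring [section] scoping entirely.
/// Good enough for a sketch; wrong for real TOML.
fn toml_value<'a>(src: &'a str, key: &str) -> Option<&'a str> {
    src.lines()
        .filter_map(|line| line.split_once('='))
        .find(|(k, _)| k.trim() == key)
        .map(|(_, v)| v.trim().trim_matches('"'))
}

fn main() {
    let cfg = "[general]\nmax_context_tokens = 8000\nauto_handoff = true\n";
    println!("{:?}", toml_value(cfg, "max_context_tokens")); // Some("8000")
}
```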
Auto-Handoff (PostToolUse Hook)
Add to `~/.claude/settings.json`:

```json
{
  "hooks": {
    "PostToolUse": [
      { "matcher": "*", "hooks": [{ "type": "command", "command": "relay hook" }] }
    ]
  }
}
```
Relay detects rate limit signals in tool output and automatically hands off.
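The detection itself might be as simple as a substring scan over tool output. The signal strings below are guesses based on the README's example, not Relay's actual match list.

```rust
/// Heuristic check for rate-limit wording in tool output.
fn looks_rate_limited(output: &str) -> bool {
    let lower = output.to_lowercase();
    ["rate limit reached", "rate_limit_error", "usage limit"]
        .iter()
        .any(|sig| lower.contains(sig))
}

fn main() {
    if looks_rate_limited("Rate limit reached. Please wait.") {
        // This is the point where the hook would invoke `relay handoff`.
        println!("rate limit detected, handing off");
    }
}
```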
Performance
- 4.6 MB binary
- < 100ms session capture
- Zero network calls for capture
- Rust — no runtime, no GC
License
MIT
Built by @masyv