xreview
Health — Passed
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 12 GitHub stars
Code — Passed
- Code scan — Scanned 6 files during light audit, no dangerous patterns found
Permissions — Passed
- Permissions — No dangerous permissions requested
This tool is an agent-native code review engine for Claude Code and Codex CLI. It orchestrates a three-party review loop where a secondary AI model reviews your code, your primary agent verifies the findings, and you make the final decision on what to fix.
Security Assessment
The tool inherently interacts with your codebase and sends code to external AI models (OpenAI) for analysis, meaning it handles potentially sensitive source code and requires an OpenAI API key. It automatically executes shell commands via its curl-based installation script and interacts with system files. However, the automated code scan found no dangerous patterns, no hardcoded secrets, and no excessive permission requests. The tool acts strictly as a prompt/skill orchestration layer rather than containing complex native logic.
Overall Risk Rating: Low. Sending code to an external API is inherent to the tool's design, but it shows no malicious patterns and no hidden data harvesting.
Quality Assessment
The project is in the early stages of community adoption with 12 GitHub stars, but it is actively maintained (last pushed 0 days ago). It meets standard health checks by including a clear description and an open-source MIT license. The provided documentation is comprehensive and transparent about exactly how the multi-agent workflow operates.
Verdict
Safe to use. It is a lightweight and transparent orchestration skill, though users should be inherently comfortable sharing their source code with OpenAI's API.
Agent-native code review engine for Claude Code — delegates review to Codex for independent three-party consensus
xreview
Agent-native code review engine for Claude Code and Codex CLI, powered by Codex.
xreview delegates code review to Codex (a separate AI model) so your coding agent gets an independent second opinion. It orchestrates a three-party review loop: Codex reviews, your agent verifies, you decide.
Curious why this exists? I wrote a post about the motivation and how it works: Why I built xreview
How It Works
When you ask your coding agent to review your code, the xreview skill takes over:
- Codex reviews your code and reports findings (bugs, security issues, logic errors)
- Your agent verifies each finding independently — reads the actual source code, confirms or challenges false positives by discussing with Codex
- Your agent presents a Fix Plan with only verified findings — trigger, impact, cascade, and fix options
- You decide — approve all recommended fixes, pick by severity, or adjust per finding
- Your agent fixes strictly per your approved plan
- Codex verifies the fixes in a follow-up round, may find new issues or reopen dismissed ones
- Repeat until all parties agree (or 5 rounds max)
- Summary — your agent produces a detailed verbal summary of all findings, decisions, and fixes
This isn't your agent reviewing its own work. It's a genuinely independent review from a different model, with your agent acting as a verification layer that filters out false positives before presenting to you.
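The bounded loop described above can be sketched in Go, the project's own language. This is an illustrative model only, not xreview's actual internals; `Finding`, `runReviewLoop`, and the callback signature are hypothetical names:

```go
package main

import "fmt"

// Finding is a simplified, hypothetical view of one review finding.
type Finding struct {
	ID string
}

// runReviewLoop models the bounded three-party cycle: each round stands
// for one review/verify/fix pass, and reviewFn stands in for a Codex
// call returning the findings still open after that round. The loop
// stops when no findings remain open or maxRounds is reached.
func runReviewLoop(maxRounds int, reviewFn func(round int) []Finding) (rounds int, open []Finding) {
	for rounds = 1; rounds <= maxRounds; rounds++ {
		open = reviewFn(rounds)
		if len(open) == 0 { // all parties agree: done
			return rounds, nil
		}
	}
	return maxRounds, open
}

func main() {
	// Toy reviewer: the single finding is resolved by round 3.
	rounds, open := runReviewLoop(5, func(round int) []Finding {
		if round < 3 {
			return []Finding{{ID: "F-001"}}
		}
		return nil
	})
	fmt.Println(rounds, len(open)) // 3 0
}
```

The hard cap guarantees termination even when the reviewer and the fixer never fully agree.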
Installation
Claude Code
Register the marketplace and install:
/plugin marketplace add davidleitw/xreview
/plugin install xreview@xreview-marketplace
Codex CLI
Paste this into your Codex CLI session:
Fetch and follow instructions from https://raw.githubusercontent.com/davidleitw/xreview/master/.codex/INSTALL.md
Or install manually:
# Install binary
curl -fsSL https://raw.githubusercontent.com/davidleitw/xreview/master/scripts/install.sh | bash
# Install skill
mkdir -p ~/.agents/skills/xreview
curl -fsSL -o ~/.agents/skills/xreview/SKILL.md https://raw.githubusercontent.com/davidleitw/xreview/master/.agents/skills/xreview/SKILL.md
Prerequisites
- Codex CLI installed and authenticated (npm install -g @openai/codex)
- OpenAI API key configured for Codex
Usage
Just ask your coding agent to review:
Use xreview to check my code for bugs and security issues
Or be specific about which files:
Use xreview to review store/db.go and handler/exec.go for security vulnerabilities
The xreview skill triggers automatically. In Claude Code, you can also invoke it directly with /xreview.
What It Catches
| Category | Examples |
|---|---|
| Security | SQL injection, command injection, hardcoded secrets, missing auth |
| Logic | Nil pointer dereference, race conditions, off-by-one errors |
| Error Handling | Ignored errors, resource leaks, unclosed connections |
| Performance | N+1 queries, unnecessary allocations |
Language-Specific Review
xreview supports language-aware review via --language. When the skill detects that review targets are written in a supported language, it automatically adds language-specific guidelines to the Codex prompt.
| Language | Key | Guidelines |
|---|---|---|
| C++ | `cpp` | ISO C++ Core Guidelines — memory safety, UB, concurrency, exception safety, ownership, class design |
| Go | `go` | Effective Go + Go Code Review Comments — goroutine safety, data races, resource leaks, error handling, concurrency patterns |
Unsupported languages fall back to general-purpose review (same behavior as without the flag).
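As a rough illustration of this detection-and-fallback behavior, mapping a review target's file extension to a `--language` key could look like the Go sketch below. The `languageKey` function and its extension list are assumptions for illustration; the skill's real detection logic may differ:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// languageKey maps a review target's file extension to the --language
// key passed to xreview. Only cpp and go are supported per the table
// above; anything else returns "" for the general-purpose review.
func languageKey(path string) string {
	switch filepath.Ext(path) {
	case ".go":
		return "go"
	case ".cc", ".cpp", ".cxx", ".h", ".hpp":
		return "cpp"
	default:
		return "" // fall back to general-purpose review
	}
}

func main() {
	fmt.Println(languageKey("store/db.go"))  // go
	fmt.Println(languageKey("src/main.cpp")) // cpp
	fmt.Println(languageKey("script.py"))    // (empty: general fallback)
}
```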
The Three-Party Loop
Each finding goes through a structured analysis:
F-001: SQL Injection (security/high)
store/db.go:34 — FindUser()
Trigger: user sends malicious string via /user?name=' OR '1'='1
Root cause: fmt.Sprintf concatenates user input directly into SQL query
Impact: attacker can read, modify, or delete any data in the database
-> Fix: changed to parameterized query db.Query("...WHERE name = ?", name)
- All findings presented at once — you see the full picture before any code changes
- Multiple fix options per finding — your agent lists alternatives with effort levels; you pick
- Every finding includes "Don't fix" — you always have the final say
After all findings are addressed, Codex verifies the fixes. If it disagrees with a dismissal or finds an incomplete fix, the loop continues.
Auto-Update
xreview keeps itself up to date automatically. During preflight (the first step of every review), it checks GitHub Releases for a newer version. The check is cached locally for 24 hours to avoid slowing things down.
When a new version is available, the skill runs xreview self-update before proceeding. The update downloads a pre-built binary matching your OS and architecture — no Go toolchain required. If the update fails for any reason, xreview continues with the current version.
You can also update manually:
xreview self-update
CLI Reference
xreview ships as a standalone Go binary that your coding agent calls under the hood:
| Command | Purpose |
|---|---|
| `xreview preflight` | Check environment (codex installed, API key, version, updates) |
| `xreview review --files <paths>` | Run initial review |
| `xreview review --files <paths> --language <key>` | Review with language-specific guidelines (cpp, go) |
| `xreview review --session <id> --message "..."` | Resume for a verification round |
| `xreview clean --session <id>` | Clean up a session |
| `xreview clean --all` | Clean up all sessions |
| `xreview self-update` | Update to the latest version from GitHub Releases |
| `xreview version` | Show version |
Development
git clone https://github.com/davidleitw/xreview.git
cd xreview
go build -o xreview ./cmd/xreview/
To load the plugin locally in Claude Code (without installing from marketplace):
claude --plugin-dir .
This loads skills/ from the repo root via .claude-plugin/plugin.json. Use /reload-plugins inside the session to hot-reload after editing skill files.
Architecture
Host Agent xreview (CLI) Codex (reviewer)
(Claude Code / Codex CLI)
| | |
|-- review request ------->| |
| |-- codex exec --------->|
| | (Codex reads code |
| | via git diff/files) |
| |<-- findings (JSON) ----|
| | [snapshot file |
| | checksums] |
|<-- findings (XML) ------| |
| | |
| [verify each finding] | |
| [challenge suspects] -->|-- codex resume ------->|
| |<-- re-evaluate --------|
| | |
| [present Fix Plan] | |
| [user approves] | |
| [fix code] | |
| | |
|-- resume --------------->| [detect changed files |
| | via checksum diff] |
| |-- codex resume ------->|
| | (prompt includes |
| | changed file list) |
| |<-- verify (JSON) ------|
|<-- verify (XML) --------| |
| | |
| [verbal summary] | |
|-- clean ---------------->| |
- xreview outputs XML on stdout for skill consumption
- Codex fetches code itself (runs git diff or reads files in read-only mode)
- Your coding agent independently verifies each finding before presenting to you
- Session state stored as JSON in /tmp/xreview/sessions/ (ephemeral)
- Multi-round: Codex session resume via --session <session-id>
- File snapshot (SHA-256 checksums) tracks changes between rounds — xreview detects which files changed and tells Codex to re-read them, ensuring reviews always evaluate the latest code
Future Work
See Roadmap & Design for the full plan. Key directions:
- Context engineering — structured context files and focused review angles, letting your agent prepare architectural context (symbol cross-references, call chains, data structure shapes) before Codex reviews. Solves the "semantic gap" where code works correctly but communicates intent poorly.
- Multi-angle review — dispatch multiple parallel Codex reviews, each focused on a different concern (semantic consistency, lifecycle naming, bugs/security), then merge and deduplicate findings. Your agent decides when multi-angle is warranted based on code complexity.
- Design plan review — review implementation plans and design docs before execution, checking for feasibility issues, missing edge cases, and architectural conflicts with existing code.
- Multi-model review — run the same code through independent reviewers (Codex, Gemini, local models) and cross-validate findings. Different models have different blind spots; cross-model consensus yields higher-confidence findings.
- More language-specific guidelines — --language currently supports C++ and Go; more languages (Rust, TypeScript, Python) are planned.
- Auto-fix mode — fully autonomous review-and-fix cycle for vibe coding workflows, requiring zero user interaction until completion.
Uninstall
Claude Code
/plugin uninstall xreview
Codex CLI
rm -rf ~/.agents/skills/xreview
Clean up binary and cached data
# Remove binary (check which location applies)
rm "$(which xreview)"
# Remove version cache
rm -rf ~/.cache/xreview
# Remove session data (optional, stored in /tmp)
rm -rf /tmp/xreview
License
MIT License — see LICENSE for details.