claude-human-review
Health Warning
- No license — Repository has no license file
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 6 GitHub stars
Code Failed
- rm -rf — Recursive force deletion command in scripts/approve.sh
Permissions Passed
- Permissions — No dangerous permissions requested
This tool is a Claude Code plugin that acts as a human-in-the-loop gate. It automatically intercepts file edits made by the AI, provides a brief plain-English summary of the changes, and prompts the user to either approve the edits or completely undo them before proceeding.
Security Assessment
Overall Risk: Medium. The tool operates by executing shell commands to create file snapshots and revert changes. A major concern is the presence of a recursive force deletion command (`rm -rf`) found in the `approve.sh` script. While this is likely used to clean up temporary snapshot directories, it poses a destructive risk if a path variable is ever empty or malformed. There are no hardcoded secrets detected, and the tool does not request inherently dangerous broad permissions, though its core function requires extensive local filesystem access. No unexpected network requests were flagged.
Quality Assessment
The project is highly active, with its last code push occurring very recently. However, it currently suffers from extremely low community visibility, having only 6 GitHub stars, which means it has undergone minimal public security auditing. There is also a conflicting audit flag regarding its license: the automated scan found no official license file, even though the README explicitly claims it is released under the MIT license.
Verdict
Use with caution. The concept is highly useful for AI safety, but the destructive shell commands and lack of peer review mean you should inspect the codebase yourself before relying on it.
Claude Code plugin that reviews every edit in plain English so you approve or undo with full context.
claude-human-review
A plugin that helps you actually understand the code Claude Code writes for you. After every turn that edits files, Claude pauses, drops short doc comments on the classes and functions it touched (JSDoc for TS/JS, docstrings for Python, and so on), then explains what changed and why in plain English. You either approve the changes or undo them. The review itself is tuned to stay short so the token cost is minimal.
Keywords: claude code review, claude code approval, approve claude changes, undo claude changes, AI code review, human-in-the-loop, claude code plugin, claude code hooks.
Features
- Stop-hook gate: blocks Claude from ending any turn that edited files until a review is shown.
- Plain-English review: 60 to 120 words covering what changed, why, and any hidden risks.
- Inline doc comments: drops one-line JSDoc, docstrings, `//` comments, etc. on new or changed classes and functions, and skips trivial helpers and tests.
- Approve / Undo prompt: reply `1` or `2`. Approve keeps the change, Undo reverts every file this turn.
- Snapshot-based undo: restores from on-disk snapshots, works in any directory (git or not), survives context compaction.
- New-file cleanup: Undo deletes files Claude created, not just reverts them.
- Risk flagging: auth, SQL, crypto, migrations, new deps, and unsnapshotted files get called out in a "Worth a second look" section.
- Read-only turns skip: Read, Bash, Grep, and Glob-only turns never interrupt you.
- Kill switch: `HUMAN_REVIEW_DISABLED=1 claude` turns the gate off without uninstalling.
Contents
- Install
- Why this exists
- Heads up: token usage
- What you'll see
- How the review differs from a default Claude summary
- How to use it
- How it works under the hood
- Why not just use CLAUDE.md?
- State and storage
- Repository layout
- Behavior details and edge cases
- Limitations
- Contributing
- License
Install
Option A: install from GitHub (recommended)
Two slash-commands inside Claude Code:
/plugin marketplace add IrtezaAsadRizvi/claude-human-review
/plugin install claude-human-review@claude-human-review
The first command registers this repo as a plugin marketplace. The second installs the plugin. Restart Claude Code once after installing so the hooks register, then the review gate fires on the next turn that edits files.
To pull updates later:
/plugin marketplace update claude-human-review
Option B: install from a local clone (for development)
If you want to hack on the plugin yourself, clone the repo and point Claude Code at your local copy:
git clone https://github.com/IrtezaAsadRizvi/claude-human-review.git
Then inside Claude Code:
/plugin marketplace add /absolute/path/to/claude-human-review
/plugin install claude-human-review@claude-human-review
Edits to skills/human-review/SKILL.md and the hook scripts take effect on the next Claude Code restart.
Option C: Claude Code official plugin marketplace
Submitted to Anthropic's community marketplace. Pending approval. Once live:
/plugin marketplace add claude-plugins-official
/plugin install claude-human-review@claude-plugins-official
Requirements
- Claude Code (any version with plugin and hook support).
- Python 3.9 or newer on PATH. The hooks are pure Python, no third-party dependencies.
- Bash for the approve and undo helper scripts.
Why this exists
Claude Code writes code fast. Faster than you can read it. If you work that way for a few weeks, you end up sitting on a codebase you kind of wrote, kind of didn't, and don't fully understand. That's painful when something breaks, when a teammate asks why a piece of code is shaped the way it is, when an auditor wants a walkthrough, or when you come back six months later to extend it.
The goal of this project is simple: make the human developer more educated about the code going into their project.
Every review is a short lesson. Here's what changed. Here's why. Here's what to watch out for. You read it, you decide, you move on. Over time that adds up, and you end up fluent in your own codebase instead of a passenger in it.
Approve keeps the change. Undo throws it out. Either way, you saw it.
Heads up: token usage
The plugin is tuned to stay cheap. The skill targets 60–120 words per review and drops sections when they'd be filler, so the output overhead per turn is small. Inline doc comments follow the same rule: one short line per symbol, only on classes, functions, exported symbols, and non-obvious config blocks, and only where the purpose isn't already obvious from the name. No JSDoc spam, no docstrings for three-line helpers.
You're paying a handful of extra tokens to actually understand what Claude just did to your code, and to get a real Undo button if you don't like it. For serious AI-human collaboration, that's cheap.
If you want to claw tokens back on the input side, caveman compresses prompts. Pairs well with a plugin that already keeps output lean.
What you'll see
When Claude finishes a turn that edited files, your terminal shows something like this:
## Review: what I just did
**What changed**
- `src/auth/session.py`: session cookie SameSite=Lax → Strict, added Secure flag.
- `tests/test_auth.py`: two cases for the new flags; fixture expects `secure=True`.
**Why**
Read "tighten cookie handling" as CSRF hardening. Strict will break cross-site embeds; I assumed you have none.
**Worth a second look**
- Strict breaks OAuth redirects landing on your domain.
- Secure drops cookies on plain-HTTP localhost dev.
1. Approve: accept these changes.
2. Undo: revert all files this turn.
Reply 1 to keep it or 2 to roll it back.
Alongside the review, Claude adds a one-line doc comment above any class, function, or config block it created or materially changed. JSDoc in TS/JS, docstrings in Python, // in Go, and so on. Example, in a file Claude just touched:
/** Validates a JWT and returns the decoded claims, or null if it's expired or signed with the wrong key. */
export function verifySession(token: string): Claims | null { ... }
Trivial helpers, tests, and anything already documented are left alone, so the diff stays small.
How the review differs from a default Claude summary
Claude already recaps its work at the end of a turn if it feels like it. This review is a different artifact, on purpose.
| | Default Claude recap | This plugin's review |
|---|---|---|
| Shape | Freeform prose, varies turn to turn | Fixed template: What changed / Why / Worth a second look |
| Length | Anywhere from one line to a full page | 60 to 120 words, hard cap 200 |
| Per-file detail | Often re-narrates each chunk | One line per file, goal-level, 12 words max |
| Risks | Mentioned only if they came up while coding | Dedicated section, targets auth, SQL, crypto, new deps, migrations, unsnapshotted files |
| Invisible risks | Usually missed | Explicitly called out: dynamic imports, loosened checks, swallowed exceptions |
| Ending | Turn just ends, you decide what to do | Forced 1. Approve / 2. Undo choice |
| Guaranteed? | No, Claude may skip it | Yes, enforced by the Stop hook |
The goal is a fixed-shape artifact you can scan in under 30 seconds and decide keep-or-revert, not a narrative of what was done.
How to use it
There's nothing to do. Once installed, the plugin runs itself. From your side it looks like this:
- You prompt Claude as normal. Something like "refactor the auth middleware to use JWT."
- Claude edits files as normal. You don't see anything different while it works.
- At the end of the turn Claude pauses and writes the review shown above.
- You reply:
  - `1` or `approve`: changes stick, session state clears, you're done.
  - `2` or `undo`: every file edited this turn gets reverted. Newly created files get deleted.
- Anything else is treated as a fresh prompt. The previous turn is implicitly accepted and the plugin starts tracking the new one.
What counts as a change
| Tool Claude used | Triggers a review? |
|---|---|
| `Edit`, `Write`, `NotebookEdit` | Yes |
| `Read`, `Bash`, `Grep`, `Glob` | No |
Read-only exploration turns never interrupt you. Only turns that actually changed your files do.
Turn it off temporarily
Set an environment variable before launching Claude Code:
HUMAN_REVIEW_DISABLED=1 claude
The Stop hook honors this flag and lets Claude stop silently. Snapshots and edit logs still get written harmlessly, so you can flip the flag back on mid-session if you change your mind.
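The flag check itself is a one-liner. A minimal sketch of how a hook might honor it (the variable name comes from this README; the surrounding exit behavior is an assumption, not the plugin's actual code):

```python
import os
import sys

def gate_disabled() -> bool:
    """True when Claude Code was launched with the kill switch set."""
    return os.environ.get("HUMAN_REVIEW_DISABLED") == "1"

if gate_disabled():
    sys.exit(0)  # exit cleanly without blocking, so Claude stops silently
```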
How it works under the hood
Three Claude Code hooks coordinate through a per-session state directory.
- `PreToolUse` (Edit, Write, NotebookEdit): `snapshot.py` copies the original file.
- `PostToolUse` (Edit, Write, NotebookEdit): `track_edits.py` logs the edit.
- `Stop` (end of turn): `review_gate.py` blocks the stop and injects the review prompt.
When the Stop hook fires with a non-empty edit log, it blocks Claude from stopping and injects a prompt telling Claude to load the human-review skill. That skill contains the template, tone rules, and example Claude uses to write the review. Your 1 or 2 reply routes to two small Bash helpers (scripts/approve.sh and scripts/undo.sh) that do the actual state mutation.
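Claude Code Stop hooks can block a stop by printing a JSON decision to stdout. A simplified sketch of that gate logic, using the state files described under State and storage; the function name, prompt wording, and session path here are illustrative, not the plugin's actual code:

```python
import json
from pathlib import Path
from typing import Optional

def gate(state_dir: Path) -> Optional[dict]:
    """Block the stop when this turn edited files and no review has been shown yet."""
    edit_log = state_dir / "edit_log.jsonl"
    flag = state_dir / "review_shown.flag"
    if not edit_log.exists() or flag.exists():
        return None  # nothing edited, or review already shown: let Claude stop
    flag.touch()  # one-shot guard so the Stop hook doesn't loop forever
    return {
        "decision": "block",
        "reason": "Load the human-review skill and present the review before stopping.",
    }

if __name__ == "__main__":
    decision = gate(Path(".claude") / "human-review" / "current-session")
    if decision:
        print(json.dumps(decision))  # Claude Code reads stdout and keeps the turn open
```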
Undo uses filesystem snapshots, not git, so it works the same way in any directory: git repo, non-repo, or a folder full of untracked files.
Why not just use CLAUDE.md?
Fair question, and the obvious one. You could drop something like "after every edit, summarize what you did and ask for approve or undo" into your CLAUDE.md, and Claude would try to follow it. It works, sometimes. Here's why the plugin exists anyway.
CLAUDE.md gets read. Hooks get executed.
CLAUDE.md is context Claude reads, weighed against your actual prompt. If your prompt is long or urgent, Claude can deprioritize the review instruction and just stop. There's no enforcement. The Stop hook here literally prevents Claude from ending the turn until the review is out. Deterministic, not probabilistic.
Undo needs snapshots, not memory.
To undo a set of edits you need the pre-edit contents of each file. A CLAUDE.md approach asks Claude to remember those contents and rewrite the files back. That breaks down a lot. Claude's context gets compacted mid-session and the old contents disappear. Claude does a Write without Reading first, so there's nothing to remember. Claude's rewrite of the "original" introduces its own bugs. This plugin snapshots each file to disk before the edit happens, so undo is a deterministic file restore, not another AI rewrite.
Compaction is the silent killer.
Long sessions trigger context compaction. Your pre-edit state, the early part of the conversation, the file contents Claude read two hours ago, all of it gets summarized into something shorter. If your "remember and revert" policy lives in Claude's head, it doesn't survive compaction. The plugin's edit log and snapshots live on disk, so they're immune.
The line is harness vs memory.
The Claude Code harness runs hooks for you. They fire on tool events and can't be talked out of firing. CLAUDE.md is just context Claude reads. Anytime you want a behavior to happen reliably, at a specific event, regardless of what Claude feels like doing that turn, you need a hook. That's this plugin's whole pitch.
If you're a solo dev on low-stakes code with short sessions, CLAUDE.md alone is probably fine. For anything where the review needs to actually happen every time and undo needs to actually work, the hooks are doing real work that instructions can't.
State and storage
Each Claude Code session gets its own state directory, rooted at the project where you ran Claude.
<cwd>/.claude/human-review/<session_id>/
├── snapshots/ one JSON per edited file, holding original contents
├── edit_log.jsonl one line per successful edit
└── review_shown.flag prevents the Stop hook from looping
Add .claude/human-review/ to your .gitignore. It's ephemeral, turn-by-turn state, not source of truth. Dead sessions get auto-purged after 30 days.
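For orientation, a JSONL edit log is one append per successful edit. A hypothetical sketch of that step (the field names are assumed, not the plugin's actual schema):

```python
import json
from pathlib import Path

def log_edit(state_dir: Path, tool: str, file_path: str) -> None:
    """Append one JSON line per successful edit; the Stop hook reads this later."""
    state_dir.mkdir(parents=True, exist_ok=True)
    entry = {"tool": tool, "file": file_path}
    with (state_dir / "edit_log.jsonl").open("a") as fh:
        fh.write(json.dumps(entry) + "\n")
```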
Repository layout
claude-human-review/
├── .claude-plugin/plugin.json plugin manifest
├── hooks/
│ ├── hooks.json Pre/Post/Stop hook wiring
│ ├── _common.py shared state-dir and snapshot helpers
│ ├── snapshot.py PreToolUse: snapshot original file
│ ├── track_edits.py PostToolUse: log successful edits
│ └── review_gate.py Stop: block and inject review prompt
├── skills/
│ └── human-review/SKILL.md how Claude writes the review and handles 1/2
├── scripts/
│ ├── approve.sh clear session state
│ └── undo.sh restore snapshots, delete created files
└── README.md
Behavior details and edge cases
- Multiple edits to the same file in one turn. Snapshots are first-edit-wins, so undo always restores the true pre-turn baseline.
- New-file creation. If Claude creates `foo.py`, undo deletes it.
- Binary or large files (over 1 MB). Not snapshotted, because it's expensive. The review flags them, and undo will warn that those specific files can't be reverted. Current contents stay in place.
- Failed edits. Tools that errored or were denied write no log entry. No orphan review.
- Ignoring the review. If you send a fresh prompt without answering, the previous turn is treated as implicitly approved, state clears, and the new turn gets its own review.
- Subagent edits. Not gated in v0.1. We hook `Stop`, not `SubagentStop`, so edits from subagents slip through.
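The first two behaviors fall out of the snapshot records themselves. A sketch of a restore loop, assuming each snapshot is a JSON record with `path`, `existed`, and `contents` fields (a hypothetical layout, not the plugin's actual `undo.sh`):

```python
import json
from pathlib import Path

def undo(snap_dir: Path) -> None:
    """Revert every file in this turn: restore prior contents or delete new files."""
    for snap in snap_dir.glob("*.json"):
        record = json.loads(snap.read_text())
        target = Path(record["path"])
        if record["existed"]:
            target.write_text(record["contents"])  # deterministic restore, no AI rewrite
        elif target.exists():
            target.unlink()  # file didn't exist before this turn: delete it
```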
Limitations
- Review quality depends on Claude. If the summaries feel shallow, tune `skills/human-review/SKILL.md`. The plugin code is stable; the prompt is the knob.
- Subagent blind spot, as noted above.
- Large text files near the 1 MB cap make snapshot and undo slow, since snapshots are JSON-wrapped text.
- Windows paths aren't tested. Should work, but I haven't verified it.
Contributing
PRs welcome, especially for:
- Better review examples in `SKILL.md` (prompt engineering, not code).
- Windows support.
- `SubagentStop` coverage.
- Compression for snapshots on large text files.
Keep it simple. The whole point is a plugin you can read end-to-end in one sitting.
License
MIT. Use it, fork it, ship it.