no-no-debug
Health Pass
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 35 GitHub stars
Code Pass
- Code scan — Scanned 4 files during light audit, no dangerous patterns found
Permissions Pass
- Permissions — No dangerous permissions requested
This tool is a self-evolution system and memory layer for AI coding assistants. It automatically logs coding errors, tracks bug patterns, and enforces pre-change verification gates to prevent AI assistants from repeating the same mistakes across sessions.
Security Assessment
Overall Risk: Low. The light code audit scanned four files and found no dangerous patterns, hardcoded secrets, or requests for risky permissions. The system operates entirely as a local framework: it reads and writes local Markdown files (`error_log.md`, `error_tracker.md`) for memory tracking, with logging driven by local AI hooks. The tool does not appear to make external network requests or execute unauthorized shell commands. The README notes that a critical bug in v1.2.0 was recently patched to properly handle standard input (stdin JSON) and environment variables, which indicates active maintenance regarding safe data handling.
Quality Assessment
Quality is good. The project is actively maintained, with the most recent push occurring today. It is licensed under MIT and has a clear, detailed description. Community trust is still early-stage but shows positive traction with 35 GitHub stars. The developer is transparent about past bugs and provides clear documentation and migration paths for fixes.
Verdict
Safe to use.
No-No Debug — Self-evolution system for AI coding assistants. 10 minutes writing code, 2 hours debugging? This skill makes your AI remember all its bugs.
no-no-debug
A self-evolution system for AI coding assistants
v1.2.0 — critical hook fix. Earlier releases shipped a `settings.json` template that referenced `$CLAUDE_TOOL_NAME` / `$CLAUDE_USER_PROMPT` environment variables that don't exist — every install was silently writing empty `TOOL_FAIL | failed` log lines and never capturing real corrections. Hook context is delivered via stdin JSON. v1.2.0 replaces the template with two small standalone scripts under `hooks/` that parse stdin correctly, strip injected XML context blocks to kill false triggers, and fix Python's CJK word-boundary edge case. See CHANGELOG.md for the migration path from v1.1.0.
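The stdin-based delivery described above can be sketched as a minimal hook handler. This is only an illustration of the fix's shape: the field names (`tool_name`, `prompt`) are assumptions, the actual payload schema is defined by Claude Code, and the shipped `hooks/` scripts are the authoritative parsers.

```python
import json
import re

def handle_hook(raw: str) -> str:
    """Sketch of a stdin-parsing hook: context arrives as JSON, not env vars."""
    # In a real hook, raw would come from sys.stdin.read().
    event = json.loads(raw or "{}")
    tool = event.get("tool_name", "unknown")
    prompt = event.get("prompt", "")
    # Strip injected XML-style context blocks so they can't cause false triggers.
    prompt = re.sub(r"<[^>]+>.*?</[^>]+>", "", prompt, flags=re.DOTALL)
    return f"{tool}: {prompt.strip()}"
```

This mirrors why the old template failed: reading `$CLAUDE_TOOL_NAME` from the environment yields nothing, while parsing the stdin JSON yields the real context.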
What problem does this solve?
10 minutes writing code, 2 hours debugging.
This skill fills the gap in AI's cross-session error memory, delivering three things:
- Dramatically less debug time
- Higher code quality
- A self-evolution feedback loop
The longer you use it, the fewer mistakes get repeated.
How it works
1. Real-time Logging (automatic)
When the AI is corrected, code errors occur, deploys fail, or tests don't pass — it automatically appends to a local error_log.md with a timestamp. No need to say "write that down."
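A minimal sketch of what such an append could look like. The file name comes from the README; the entry format shown here is an assumption, not the tool's actual log schema.

```python
from datetime import datetime
from pathlib import Path

def log_error(description: str, log_path: str = "error_log.md") -> str:
    """Append a timestamped error entry to the local Markdown log."""
    entry = f"- [{datetime.now():%Y-%m-%d %H:%M}] {description}\n"
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(entry)
    return entry
```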
2. Three Gates (on every code change)
Before the change: what does this affect?
After the change: did you actually verify it?
Before deploying: did you test with a non-admin account?
Runs silently. No output when all gates pass.
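The three gates can be pictured as a checklist that only speaks up on failure. This is purely illustrative: in practice the gates are prompt-level rules in SKILL.md, not executable code.

```python
def run_gates(impact_known: bool, verified: bool, tested_non_admin: bool) -> list[str]:
    """Return the names of failed gates; an empty list means all gates pass silently."""
    gates = {
        "pre-change: impact analyzed": impact_known,
        "post-change: actually verified": verified,
        "pre-deploy: tested with non-admin account": tested_non_admin,
    }
    return [name for name, passed in gates.items() if not passed]
```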
3. Periodic Review (auto-triggers every 3 days)
Reads error_log.md, categorizes by dimension, updates error_tracker.md, outputs an evolution report.
Review frequency is configurable: 1 day / 3 days (default) / 7 days.
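The trigger condition amounts to a simple elapsed-days check, sketched below. Where the tool actually stores the last-review timestamp is an assumption here; only the 1/3/7-day intervals come from the README.

```python
from datetime import date

def review_due(last_review: date, today: date, interval_days: int = 3) -> bool:
    """True when the configurable review interval (1 / 3 / 7 days) has elapsed."""
    return (today - last_review).days >= interval_days
```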
4. Rule Accumulation
New error type → automatically creates a prevention rule.
Repeated offense → counter increments, rule strengthens.
4 consecutive clean periods → marked as cured.
Rules persist across sessions — nothing gets lost.
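The rule lifecycle above can be sketched as a counter. The field names are illustrative and not the tracker's actual schema; only the rules (increment on repeat, cured after 4 clean periods) come from the README.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    error_type: str
    offenses: int = 1       # a new error type starts a rule at count 1
    clean_periods: int = 0
    cured: bool = False

    def record_offense(self) -> None:
        """Repeated offense: counter increments, clean streak resets."""
        self.offenses += 1
        self.clean_periods = 0
        self.cured = False

    def record_clean_period(self) -> None:
        """4 consecutive clean review periods mark the rule as cured."""
        self.clean_periods += 1
        if self.clean_periods >= 4:
            self.cured = True
```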
5. Confirmation Gate
The following situations require user confirmation before proceeding:
- New feature development (not a bug fix)
- Changes involving databases, environments, or deployments
- Publishing to external platforms
- When the user raises a new idea or new direction mid-task
6. Auto Hooks
Automatically configures Claude Code hooks on install:
- Command errors → auto-logged to error_log.md
- After editing a file → auto-reminder to verify
- When user corrects the AI → correction content auto-logged
Tracked Dimensions
| Dimension | What it tracks |
|---|---|
| Data Accuracy | Do displayed numbers/formulas match actual code |
| Environment Safety | Did config changes break login or the database |
| Foresight | Were permission, migration, or cache issues caught before deploy |
| User Perspective | Does the feature work end-to-end from the user's account |
| Verification | Was there a real end-to-end test after the fix |
| Memory Consistency | Did the AI read existing records instead of asking again |
| Tool Judgment | Did failing tools get swapped out promptly |
| Review Completeness | Were reviews and summaries thorough with nothing missed |
| Operational Precision | Did changes produce unintended side effects |
| Check Before Doing | With unfamiliar tools/versions, was documentation checked first |
| Conciseness | Was a 3-line solution turned into 300 lines |
| Regression Awareness | Did fixing one bug introduce a new one |
| Style Consistency | Does new code follow the project's existing style and architecture |
| Independent Judgment | When the user's premise is wrong, does the AI push back instead of blindly executing |
| Real-env Verification | Was the fix validated with the real production command, not a sandbox/test harness |
| Cross-agent Trust | Did the AI re-verify another agent's "pass" report instead of trusting it blindly |
| Dumb things humans will do | Not yet committed, but inevitable |
| Dumb things AI will do | Same, but the AI edition |
Real-world data
| Period | Errors | Notes |
|---|---|---|
| Week 1 | 29 | Baseline, 10 dimensions identified |
| Week 2 | 6 | Rules starting to take effect |
| Week 3 | ~0 | Still running |
Installation
Claude Code (recommended — full experience with auto hooks)
claude skill add summerliuuu/no-no-debug
Zero configuration. Tracking files and hooks are initialized automatically on first run.
If you're installing manually or upgrading from v1.1.0, also copy the hook scripts:
mkdir -p ~/.claude/hooks
cp hooks/user_prompt_filter.py ~/.claude/hooks/
cp hooks/post_tool_failure.sh ~/.claude/hooks/
chmod +x ~/.claude/hooks/post_tool_failure.sh
Then merge the PostToolUseFailure / UserPromptSubmit blocks from SKILL.md → Mechanism 6 into ~/.claude/settings.json.
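The merged result looks roughly like the fragment below. This is a sketch of the Claude Code hooks schema under the assumptions that the event names match those in SKILL.md → Mechanism 6; that file is authoritative, and the exact structure may differ by Claude Code version.

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "python3 ~/.claude/hooks/user_prompt_filter.py" }
        ]
      }
    ],
    "PostToolUseFailure": [
      {
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/post_tool_failure.sh" }
        ]
      }
    ]
  }
}
```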
Prerequisites. The shipped hooks use python3 (UserPromptSubmit filter) and jq (PostToolUseFailure parser). Both ship by default on recent macOS and most Linux distros; if either is missing the corresponding hook degrades gracefully — python3 missing means the correction detector silently no-ops, jq missing writes a one-time jq_not_installed marker to the log. Install with brew install jq (macOS) or your package manager if you want full coverage.
ChatGPT / Cursor / Copilot / Other AI assistants
- Copy the contents of SKILL.md
- Paste it into your system prompt or custom instructions
- Core features (3-gate checkpoint, periodic review, rule accumulation) work immediately
- Auto hooks are Claude Code only — other tools rely on the AI following the rules in SKILL.md
Compatibility
- Claude Code — full experience with auto hooks and one-line install
- Other AI coding assistants (ChatGPT, Cursor, Copilot, etc.) — paste SKILL.md content into your system prompt to use core features
- Any project, any language
- Pairs well with claude-mem (for session search)
⭐ Thank you for starring this project during installation.
📕 Xiaohongshu @Summer的AI日常 — AI coding in practice
🇨🇳 Chinese version (中文版)
License
MIT — @summerliuuu