Prompt-sensei
Health: Warning
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 5 GitHub stars
Code: Failed
- process.env — Environment variable access in scripts/observe.ts
- process.env — Environment variable access in scripts/report.ts
- spawnSync — Synchronous process spawning in scripts/update.ts
Permissions: Passed
- Permissions — No dangerous permissions requested
This local-first prompt coaching tool helps developers improve prompts used with AI coding agents like Claude Code and Codex. It analyzes local history, provides feedback on rough prompts, and suggests improvements to build better prompting habits.
Security Assessment
Overall risk: Low. The tool operates entirely offline without any cloud telemetry, external network requests, or hardcoded secrets. It does not request any dangerous permissions. However, there are a few code-level flags to be aware of. It accesses environment variables in its observation and reporting scripts. Additionally, the auto-updater relies on synchronous process spawning (`spawnSync`). While not inherently malicious, executing shell commands always warrants slight caution, and users should review the updater script before running it.
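For context, the flagged pattern typically looks like the TypeScript sketch below. This is hypothetical code for review orientation only; the environment variable name and the exact command are assumptions, not taken from the repository.

import { spawnSync } from "node:child_process";

// Hypothetical sketch of the flagged pattern, not the repo's actual update script.
// spawnSync blocks until the child process exits, and the command it runs is
// exactly what a reviewer should read before trusting an auto-updater.
const skillDir = process.env.CLAUDE_SKILL_DIR ?? process.cwd(); // env access, as flagged
const pull = spawnSync("git", ["pull", "--ff-only"], { cwd: skillDir, stdio: "inherit" });
if (pull.status !== 0) {
  console.error("Update failed; inspect scripts/update.ts before retrying.");
  process.exit(pull.status ?? 1);
}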
Quality Assessment
The project uses the permissive Apache-2.0 license and is actively maintained, with its most recent code push occurring today. It features clear documentation and an ethical, privacy-focused design. The main drawback is its extremely low community visibility. Having only 5 GitHub stars means the codebase has not undergone widespread peer review. Developers should expect early-stage software and rely on their own code review rather than community validation.
Verdict
Safe to use, but perform a quick manual review of the update and environment variable scripts due to low community oversight.
Local-first prompt coach for Claude Code and Codex that improves prompts, observes prompting habits, and analyzes local history with explicit consent.
Prompt Sensei
Prompting is becoming one of the most important engineering skills in the AI era. Prompt Sensei improves your prompts while teaching you how to write better ones yourself.
Prompt Sensei is a quiet, local-first prompt coach for Claude Code and Codex, including IDE/plugin environments that can load those skills. It gives stage-aware feedback, rewrites rough prompts into better ones, looks back on local history when you ask, and helps you practice one habit at a time.
No cloud. No telemetry. No leaderboard. No raw prompt archive.
Prompt Sensei's score is a coaching signal, not an objective grade or a guarantee of better output. The real test is whether the improved prompt produces a more useful first draft, fewer clarification turns, safer agent behavior, and easier verification.
Why This Exists
AI coding agents do real work: they edit files, run commands, and make implementation choices. A vague prompt does not just produce a vague answer; it can produce broad edits, hidden assumptions, and extra revision loops.
Prompt Sensei is built around a simple belief:
- Prompting is an engineering skill.
- Engineering skills improve through feedback.
- Feedback should be kind, specific, private, and useful.
It does not treat "fix this test" as a bad prompt. It treats it as an early-stage prompt, then teaches the next step.
5-Minute Quickstart
Install for Claude Code:
git clone https://github.com/chengzhongwei/Prompt-sensei ~/.claude/skills/prompt-sensei
(cd ~/.claude/skills/prompt-sensei && npm install && npm run build)
Then start live coaching with /prompt-sensei observe inside Claude Code.
Install for Codex:
git clone https://github.com/chengzhongwei/Prompt-sensei ~/.codex/skills/prompt-sensei
(cd ~/.codex/skills/prompt-sensei && npm install && npm run build)
Then ask Codex in natural language: Use prompt-sensei observe mode.
Start live coaching in Claude Code:
/prompt-sensei observe
Try improving a rough prompt:
/prompt-sensei improve "fix this test"
Example output:
Prompt Sensei Improve
=====================
Stage: Exploration
Score: 70 / 100 (Good)
What is missing:
- failing test name
- expected behavior
- actual error output
Improved prompt:
Help me debug this failing test.
Test: [test name]
Expected: [what should happen]
Actual: [error output or wrong behavior]
Related file: [file path]
Return:
1. Likely root cause
2. Minimal fix
3. Test command to verify
Habit to practice next:
Add expected and actual behavior before asking for a fix.
See examples/prompt-gallery.md for more copyable before/after prompts.
Supported Environments
Prompt Sensei works best when the host tool can load the skill directly.
| Invocation style | Environments |
|---|---|
| Direct skill command, e.g. /prompt-sensei improve "fix this test" | Claude Code and compatible Claude Code skill environments |
| Natural language, e.g. Use prompt-sensei to improve this prompt... | Codex and IDE/plugin environments where slash commands are not available |
If Claude Code or Codex is running inside an IDE plugin and can access the installed skill, use the same Prompt Sensei workflow there.
/prompt-sensei observe is a Claude Code skill command. In Codex and other hosts without slash commands, natural-language requests depend on the host agent loading the installed skill. npm run init only creates Prompt Sensei's local consent/session record; it does not turn on live coaching by itself.
Commands
/prompt-sensei [observe|stop|improve|lookback|report|help|clear|update]
/prompt-sensei observe # start live coaching
/prompt-sensei stop # stop coaching this session
/prompt-sensei improve "fix this" # rewrite a prompt with one teaching note
/prompt-sensei lookback # analyze selected local prompt history
/prompt-sensei report # show local session trends
/prompt-sensei update # pull latest version and rebuild
/prompt-sensei clear # delete local Prompt Sensei data
/prompt-sensei help
For Codex, use natural language:
Use prompt-sensei observe mode.
Use prompt-sensei to improve this prompt: "fix this test"
Use prompt-sensei to look back at my recent prompts.
Use prompt-sensei to show my report.
For direct script checks:
npm run init # create local consent/session records
npm run observe # show observe-script usage when run interactively
How It Teaches
Prompt Sensei is stage-aware. A short exploration prompt can be reasonable early, while an execution prompt needs clearer context, boundaries, and verification.
| Stage | Meaning | Example |
|---|---|---|
| Exploration | Still figuring out the problem | why is this broken |
| Diagnosis | Have evidence or symptoms | expected /login, actual /dashboard |
| Execution | Want implementation or changes | implement this with these constraints |
| Verification | Want correctness checks | find edge cases and test commands |
| Reusable workflow | Want a repeatable process | create a code review checklist |
| Action | Short follow-through directive | ok commit and push to main |
The coaching line stays small:
[Sensei: 68/100 · Diagnosis · Tip: add the error message and file path]
Think of the score as prompt readiness for the current stage, not universal prompt quality. A 100/100 prompt can still produce a weak answer if the model lacks domain knowledge, the task is ambiguous outside the prompt, or the rubric does not fit the user's domain.
The report focuses on growth:
Average score: 81 / 100 (Good)
Most common type: implementation
Next habit: End prompts with the exact test command or edge cases.
For the full philosophy, read docs/philosophy.md. For scoring details, read docs/scoring-rubric.md.
How Do I Know It Helps?
Prompt Sensei does not prove that a prompt is better just because the number went up. It gives a structured coaching signal.
To check whether it helped, compare the original prompt and the improved prompt on the same task:
- Did the AI ask fewer clarification questions?
- Was the first answer more accurate or easier to act on?
- Did the agent make fewer broad or surprising changes?
- Was the result easier to verify?
If the improved prompt does not produce a better outcome, treat that as useful feedback for the rubric. Prompt Sensei is meant to teach better prompting habits, not replace your judgment.
For maintainers, npm run eval prints calibration fixtures that make it easier to review Sensei's stage choices, score bands, flags, and tips across weak, medium, strong, privacy-risk, safety-risk, and non-engineering prompts.
Lookback
/prompt-sensei lookback analyzes selected local Claude Code or Codex history after separate consent.
It can:
- find recent Claude Code and Codex sessions
- analyze one session or all sessions
- generate one-by-one coaching or a full lookback report
- save a markdown report only after confirmation
The flow is guided one step at a time: choose from a visible session list, choose the analysis format, choose the prompt count, then confirm consent.
Lookback reads raw history locally, redacts user prompts before analysis, and does not store raw history, prompt hashes, or derived metadata by default.
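As a rough illustration of what such a redaction pass involves, here is a minimal TypeScript sketch. The patterns, placeholders, and function name are illustrative assumptions, not the project's actual rules:

// Illustrative redaction sketch; the real rules ship with the skill and may differ.
const RULES: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email]"],                // email addresses
  [/\b(?:sk|ghp|xoxb)-[A-Za-z0-9_-]{10,}\b/g, "[token]"], // common API-token shapes
  [/(?:\/[\w.-]+){2,}/g, "[path]"],                       // absolute filesystem paths
];

export function redact(prompt: string): string {
  // Apply each rule in order; later rules see earlier replacements.
  return RULES.reduce((text, [pattern, placeholder]) => text.replace(pattern, placeholder), prompt);
}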
Optional Claude Code Hook
Add a UserPromptSubmit hook to ~/.claude/settings.json to record hash-only prompt metadata in the background after consent:
{
"hooks": {
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "node ~/.claude/skills/prompt-sensei/dist/scripts/observe.js --hash-only",
"async": true,
"timeout": 10
}
]
}
]
}
}
Hook captures are excluded from scoring because hooks do not have enough conversation context to classify the prompt or choose a coaching tip.
Privacy
Prompt content is sensitive. Prompt Sensei stores nothing until you consent.
After consent, it stores local metadata only:
- ~/.prompt-sensei/events.jsonl — observation log
- ~/.prompt-sensei/config.json — consent record
- ~/.prompt-sensei/update-check.json — cached update status
- ~/.prompt-sensei/reports/ — optional saved lookback reports
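As an orientation aid, a single observation event likely resembles the following TypeScript shape. The field names here are assumptions for illustration; the real schema is defined by the project's scripts.

// Assumed shape of one line in events.jsonl; the actual fields may differ.
interface ObserveEvent {
  ts: string;         // ISO-8601 timestamp of the prompt
  promptHash: string; // hash of the prompt text; raw text is never written
  stage: string;      // classified stage, e.g. "diagnosis"
  score: number;      // 0-100 stage-readiness score
}

const example: ObserveEvent = {
  ts: "2026-01-01T12:00:00Z",
  promptHash: "sha256:…", // truncated placeholder
  stage: "diagnosis",
  score: 68,
};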
Prompt Sensei's local scripts do not send prompt text, scores, reports, or local event data to a service. Lookback may show redacted user prompts to the current AI agent after separate consent.
See docs/privacy.md for details.
Contributing
Good first areas:
- realistic prompt improvement examples
- scoring rubric improvements
- redaction rule improvements
- report improvements
- support for other AI coding tools
Please keep changes aligned with the core philosophy: quiet, local-first, encouraging, and privacy-aware.
License
Apache-2.0 — Copyright 2026 Chengzhong Wei