research-mode
Health: Warn
- No license — Repository has no license file
- Description — Repository has a description
- Active repo — Last push today
- Community trust — 107 GitHub stars
Code: Pass
- Code scan — Scanned 2 files during light audit, no dangerous patterns found
Permissions: Pass
- Permissions — No dangerous permissions requested
This tool is an anti-hallucination toggle for Claude Code. It activates specific constraints that force the AI to cite sources, use direct quotes, and openly admit when it lacks the information to answer a question.
Security Assessment
The overall risk is Low. The light code audit found no dangerous patterns across its files, no hardcoded secrets, and no requests for risky system permissions. Since it acts primarily as a text prompt wrapper rather than an executable script, it does not independently access sensitive data, execute shell commands, or make external network requests.
Quality Assessment
The project is in active development, with its most recent updates pushed today. It has solid community trust for a niche utility, currently backed by 107 GitHub stars. However, the repository lacks a formal license file. While this is common for small prompt utilities, the absence of a license technically means all rights are reserved by the creator, which could be a minor concern if you plan to modify or redistribute the code.
Verdict
Safe to use, though corporate users should note the missing license if adapting the code for broader distribution.
Anti-hallucination research mode for Claude Code. Toggle on/off to enforce citation requirements and source grounding.
Research Mode for Claude Code
Anti-hallucination toggle for Claude Code. Activates three constraints from Anthropic's documentation that force Claude to cite sources, say "I don't know" when unsure, and ground responses in direct quotes.
Install
Works in Claude Code CLI, the macOS desktop app, and the VS Code extension. Open Claude Code and run these two commands:
Step 1 -- Add the marketplace:
/plugin marketplace add assafkip/research-mode
Step 2 -- Install the plugin:
/plugin install research-mode@assafkip-research-mode
That's it. To confirm it installed, run /plugin and check that research-mode appears in your list.
Alternative -- install as a standalone skill:
If you prefer not to use the plugin system, clone this repo and copy the SKILL.md file into your project's .claude/skills/research-mode/ directory.
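For the standalone route, the clone-and-copy steps look roughly like this. This is a sketch, not an official installer: the repository URL is inferred from the marketplace name assafkip/research-mode, and the destination path follows the .claude/skills/research-mode/ convention mentioned above.

```shell
# Clone the repo to a temporary location (URL assumed from the marketplace name).
git clone https://github.com/assafkip/research-mode /tmp/research-mode

# From your project root, create the skill directory and copy SKILL.md into it.
mkdir -p .claude/skills/research-mode
cp /tmp/research-mode/SKILL.md .claude/skills/research-mode/
```

Run the commands from your project root; Claude Code picks up skills from the project's .claude/skills/ directory.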
Use
/research-mode:research
Or with a topic:
/research-mode:research what caused the Change Healthcare breach
Say "exit research mode" to turn it off.
What it does
Three constraints activate simultaneously:
- Say "I don't know" -- no guessing, no inferring. If there's no credible source, Claude says so.
- Cite everything -- every claim must reference a file, URL, paper, or named source. Unsourced claims get retracted.
- Quote first, then analyze -- responses are grounded in word-for-word quotes from source material, not paraphrased summaries.
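To make the three constraints concrete, a skill file encoding them might look something like the sketch below. This is a hypothetical illustration, not the repository's actual SKILL.md; the frontmatter fields follow the common name/description convention for Claude Code skills.

```markdown
---
name: research-mode
description: Anti-hallucination research mode. Cite sources, quote before analyzing, admit uncertainty.
---

While research mode is active:

1. If no credible source supports a claim, say "I don't know" instead of guessing or inferring.
2. Attach a citation (file, URL, paper, or named source) to every factual claim; retract anything unsourced.
3. Quote the relevant source material verbatim before offering analysis or synthesis.

Deactivate when the user says "exit research mode".
```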
What it doesn't do
- Not always-on. It's a toggle. Turn it on for research, off for creative work.
- Not slow. Claude still uses tools in parallel and works efficiently.
- Not restrictive on new ideas. You can synthesize across sources, but inputs must be grounded.
Why
LLMs hallucinate. When you're doing research that matters, you need guardrails that force citation discipline. This plugin packages Anthropic's own recommendations into a one-command toggle.
Troubleshooting
"Plugin not found" after Step 1: Make sure you typed the marketplace command exactly as shown. The marketplace name is assafkip/research-mode (GitHub username/repo).
Command doesn't appear after install: Run /reload-plugins to refresh, or restart Claude Code.
Want to scope it to one project only: Append --scope project to the install command in Step 2, i.e. /plugin install research-mode@assafkip-research-mode --scope project.
Built by
Assaf Kipnis -- built while running GTM, investor outreach, and content ops for KTLYST entirely through Claude Code. When your AI assistant is writing your pitch decks, researching competitors, and drafting investor briefs, hallucinated facts aren't a minor annoyance. They're a credibility risk. This toggle exists because I needed it.
Questions or feedback: [email protected]