phd-skills
Health Pass
- License — MIT
- Description — Repository has a description
- Active repo — Last push today
- Community trust — 169 GitHub stars
Code Pass
- Code scan — Scanned 8 files during light audit, no dangerous patterns found
Permissions Pass
- Permissions — No dangerous permissions requested
This tool acts as a research integrity plugin for Claude Code, providing academic guardrails like paper auditing, citation verification against databases like DBLP, and methodology reviews for PhD workflows.
Security Assessment
Overall risk: Low. The tool does not request dangerous permissions or contain hardcoded secrets. A light code scan across 8 files found no dangerous patterns. While it mentions setting up integrations with external services like ntfy, Slack, or email for notifications (which implies making network requests), the repository itself boasts zero dependencies. It primarily operates by injecting structured skills and guidelines into your local AI environment rather than executing risky background shell commands.
Quality Assessment
The project is highly transparent and actively maintained, with its most recent push occurring today. It holds a permissive MIT license, making it fully open and safe for personal or commercial use. Community trust is solid, backed by 169 GitHub stars. Additionally, it is authored by an established researcher who built the guardrails based on extensive real-world academic usage, which adds significant credibility to the tool's reliability and intent.
Verdict
Safe to use.
phd-skills
Research integrity plugin for Claude Code — paper auditing, citation
verification, experiment analysis, and methodology-first skills for
academic workflows.
Built by Fatih Cagatay Akyon
(1300+ citations, 7 patents) after 200+ Claude Code sessions, tens of
critical AI mistakes caught the hard way, and thousands of hours of
PhD research. Every guardrail in this plugin traces to a real mistake.
Why This Plugin Exists
I use Claude Code daily for my PhD. It's powerful, but it
makes research-specific mistakes that cost hours:
- It typed "done?" as "dont?" and launched an unwanted upload of thousands of files
- It analyzed my full dataset when I asked for a specific 4k/2k/2k split
- It claimed a test covered a bug it had never actually verified
- It never once looked at a figure it generated — just trusted the numbers
Other plugins give you more commands. This plugin gives you guardrails.
Install
claude plugin marketplace add fcakyon/phd-skills
claude plugin install phd-skills@phd-skills
Then run /phd-skills:setup inside Claude Code to configure notifications, LaTeX, and allowlist.
Usage
Open Claude Code in your paper directory, then:
- /phd-skills:xray — audit paper against code and data across 5 dimensions, get prioritized fixes
- /phd-skills:factcheck — verify all BibTeX entries and cited claims against DBLP
- /phd-skills:fortify CVPR — anticipate reviewer questions, rank ablations, and suggest paper improvements
- /phd-skills:gaps neural architecture search — find what's missing in the literature
- /loop 30m check experiment logs, notify me if metrics beat the baseline or if loss starts to diverge
- "check if my numbers match the code" — skills auto-trigger, no slash command needed
- "make code publish ready" — prepares code for open-source release with license, docs, and reproducibility checks
After running /phd-skills:setup, all Claude Code notifications (task completion,
background agents) are forwarded to your configured service (ntfy/Slack/email).
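If you want to see what a forwarded notification could look like on the wire, here is a minimal sketch using ntfy's public HTTP API. The topic name `my-phd-notify` and the message text are hypothetical stand-ins (you choose a topic during setup), and the command is echoed rather than executed so nothing is actually sent:

```shell
# Hypothetical topic and message; replace with your own values.
TOPIC="my-phd-notify"
TITLE="Claude Code: task complete"
BODY="Background agent finished the paper audit."

# Build the ntfy publish command; echo it for inspection instead of
# sending. Drop the echo to actually publish the notification.
CMD="curl -s -H 'Title: ${TITLE}' -d '${BODY}' https://ntfy.sh/${TOPIC}"
echo "$CMD"
```

Slack and email forwarding follow the same pattern with a webhook URL or `mail` command in place of the ntfy endpoint.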
What You Get
Commands
| Command | What it does |
|---|---|
| /phd-skills:xray | Audit paper against code and data (5 parallel dimensions) |
| /phd-skills:factcheck | Verify BibTeX entries and cited claims against DBLP |
| /phd-skills:gaps <topic> | Literature gap analysis with web confirmation |
| /phd-skills:fortify [venue] | Select strongest ablations + anticipate reviewer questions |
| /phd-skills:setup | Interactive onboarding (notifications, allowlist, LaTeX) |
| /phd-skills:help | Show all features at a glance |
Skills (auto-trigger — just describe what you need)
| When you say... | Skill activates |
|---|---|
| "design an ablation study" | Experiment Design |
| "find related papers on X" | Literature Research |
| "review my methods section for consistency" | Paper Verification |
| "check if my numbers match the code" | Paper Verification |
| "analyze dataset bias" | Dataset Curation |
| "prepare code for open-source release" | Research Publishing |
| "what will reviewers ask about this?" | Reviewer Defense |
| "setup latex for CVPR" | LaTeX Setup |
Agents (Claude delegates automatically)
| Agent | What it does | Special |
|---|---|---|
| paper-auditor | Cross-checks paper claims vs code and data | Runs in isolated worktree, remembers patterns across sessions |
| experiment-analyzer | Analyzes results from wandb/neptune/local/any format | Can schedule monitoring via cron, sends SSH notifications |
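To make the experiment-analyzer's monitoring concrete, here is an illustrative sketch (not the plugin's actual implementation) of the kind of check it can schedule: compare the latest metric in a local log against a baseline and report when it is beaten. The log path, CSV format (`step,loss,accuracy`), and baseline value are all assumptions; the sample log is generated inline so the snippet is self-contained:

```shell
# Create a tiny sample metrics log (step,loss,accuracy) for the demo.
LOG=$(mktemp)
printf '100,0.52,0.88\n200,0.41,0.93\n' > "$LOG"
BASELINE="0.91"

# Latest accuracy = third field of the last line.
LATEST=$(tail -n 1 "$LOG" | awk -F, '{print $3}')

# awk handles the floating-point comparison portably.
if awk -v a="$LATEST" -v b="$BASELINE" 'BEGIN { exit !(a > b) }'; then
  MSG="metric $LATEST beats baseline $BASELINE"
  echo "$MSG"
fi
rm -f "$LOG"
```

A crontab entry such as `*/30 * * * * sh check_metrics.sh` (script name hypothetical) would run a check like this every 30 minutes, matching the 30m cadence from the /loop example above.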
Research Guardrails (run silently — you never invoke these)
| What it catches | Real incident that inspired it |
|---|---|
| Conclusions and recommended actions reviewed against actual artifacts (configs, run dirs, metric files) by a fresh-context research peer | Claude removed introduction novelty claims, analyzed wrong data split, dropped a verification question mid-commit |
| Missing citation verification when editing .tex/.bib | Claude propagated unverified author names and venue info |
| LaTeX compilation errors after .tex edits | Errors compounded across multiple edits before being caught |
| Unreviewed generated images/figures | Claude analyzed metrics but never looked at the actual plots |
| Research state loss before context overflow | Long research sessions lost context, leading to rushed conclusions |
How It Compares
| | phd-skills | flonat/claude-research | Others |
|---|---|---|---|
| Commands to learn | 6 | 39 | 13-20 |
| Research integrity hooks | 5 (agent + command) | 1 | 0 |
| Paper-code consistency audit | 5-dimension parallel | Read-only, no code cross-ref | None |
| Experiment monitoring + SSH notifications | Yes (ntfy/slack/email) | No | No |
| External dependencies | None | npm + pip + MCP servers | MCP required |
| Install time | 30 seconds | 10+ minutes | Varies |
Design Principles
- No MCP dependency — works on any machine, including SSH
- Methodology over scripts — skills teach the approach, Claude generates code for your specific setup (wandb, neptune, local files, whatever)
- Human oversight first — Claude makes premature claims and jumps to conclusions. Every skill builds in verification checkpoints
- Actionable output — ranked suggestions with specific fixes, never just a list of findings
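The verification-checkpoint idea can be illustrated with a small sketch (again, not plugin code): pull a reported number out of a paper sentence and compare it against the value the code actually produced. The sentence, the results value, and the extraction pattern are all stand-ins for demonstration:

```shell
# Sample paper sentence and measured value; both are stand-ins.
TEX_LINE='Our method reaches 93.0\% accuracy on the test split.'
RESULTS_VALUE="92.4"

# Pull the number that precedes "\%" out of the sentence.
PAPER_VALUE=$(printf '%s\n' "$TEX_LINE" | grep -o '[0-9]*\.[0-9]*\\%' | tr -d '\\%')

# Flag the discrepancy instead of silently trusting the paper.
if [ "$PAPER_VALUE" != "$RESULTS_VALUE" ]; then
  MSG="mismatch: paper says $PAPER_VALUE, results say $RESULTS_VALUE"
  echo "$MSG"
fi
```

A checkpoint like this is deliberately dumb and loud: it surfaces the mismatch for a human to resolve rather than deciding which number is right.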
License
MIT — use it, fork it, adapt it to your research.
Built with frustration and care during a PhD at METU.