Rstack
Health: Warn
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 6 GitHub stars

Code: Fail
- rm -rf — Recursive force deletion command in test/telemetry-test.sh

Permissions: Pass
- Permissions — No dangerous permissions requested
This skill provides a suite of automation commands for Claude Code, designed to streamline academic research. It assists users by chaining tasks like literature reviews, cloud GPU provisioning via Modal, and LaTeX formatting to help generate a submittable paper from a single prompt.
Security Assessment
The overall risk is Medium. The tool inherently executes shell commands to set up environments and run experiment pipelines. The automated scanner flagged the presence of a recursive force deletion command (`rm -rf`) inside a telemetry test script. While often used for harmless temporary file cleanup, this requires manual verification to ensure it cannot be maliciously repurposed or accidentally misused. The installation process also involves cloning external code directly into your local environment. The tool does not request explicitly dangerous system permissions and no hardcoded secrets were detected, though it does interact with external cloud services like Modal.
Quality Assessment
The project is actively maintained, with repository activity as recent as today. It is properly licensed under the standard MIT License and includes clear, structured documentation. However, community trust and visibility are currently very low. With only 6 GitHub stars and a relatively small user base, the codebase has not undergone widespread peer review or battle-testing by the broader developer community.
Verdict
Use with caution — verify the exact behavior of the deletion commands in the test scripts and external API calls before integrating into your workflow.
RStack — Claude Code for Research
Research automation skills for Claude Code. Type /research and go from idea to submittable paper. Each skill works standalone but chains together into a full pipeline.
A PhD student with a deadline spends 80% of their time on grunt work: finding papers, provisioning GPUs, formatting LaTeX. RStack compresses that to near-zero. The thinking stays with the researcher.
Skills
| Skill | What it does | When to use |
|---|---|---|
| `/research` | Full pipeline: idea to paper in one session | "Write a paper about...", "research this" |
| `/lit-review` | Find papers, structured summary, gap analysis | "Find papers about...", "literature review" |
| `/novelty-check` | Assess novelty, refine hypothesis | "Is this novel?", "check existing work" |
| `/experiment` | Generate code, run on cloud GPU (Modal), iterate | "Run experiments", "train a model" |
| `/analyze-results` | Publication-ready figures, tables, statistics | "Make figures", "analyze results" |
| `/write-paper` | Venue-formatted LaTeX with real results and citations | "Write the paper", "format for arXiv" |
| `/setup` | Configure Modal, tectonic, venue preferences | First-time setup |
| `/rstack-upgrade` | Upgrade to latest version | "Upgrade rstack" |
Install (30 seconds)
```sh
git clone --single-branch --depth 1 https://github.com/sunnnybala/Rstack.git ~/.claude/skills/rstack
cd ~/.claude/skills/rstack && ./setup
```

Then in Claude Code, run `/setup` to configure Modal and install tectonic.
For teams (vendored into project):

```sh
cp -Rf ~/.claude/skills/rstack .claude/skills/rstack
rm -rf .claude/skills/rstack/.git
cd .claude/skills/rstack && ./setup --local
```
Quick Start
The full pipeline:

```
/research "Investigate whether mixture-of-experts improves efficiency of small language models on code generation tasks"
```

Individual skills:

```
/lit-review "transformer efficiency for code generation"
/novelty-check     # compare idea against found papers
/experiment        # generate and run experiments on Modal
/analyze-results   # create figures and tables
/write-paper       # write arXiv-formatted paper
```
The Research Pipeline
```
IDEA → /lit-review → /novelty-check → /experiment → /analyze → /write-paper → PAPER
  ↑         ↑              ↑               ↑            ↑            ↑
  └─────────┴──────────────┴───────────────┴────────────┴────────────┘
                    revision loops at every checkpoint
```
Every phase transition is a human checkpoint. You approve the literature review before novelty assessment. Approve the experiment plan before cloud submission. Review each paper section before the next. The pipeline is iterative, not linear.
How it Works
Each skill is a `SKILL.md` file that Claude Code reads and follows. No backend, no database, no custom agents. Work products live at your project root as normal files. Structured logs persist in `.rstack/`.

Cloud compute happens through Modal CLI commands that Claude runs directly, the same pattern as GStack running `git push` or `gh pr create`.
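The append-only JSONL convention above can be sketched in a few lines of Python. The record fields here (`run_id`, `status`, `started_at`) are illustrative assumptions, not RStack's actual schema:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_experiment(log_path: Path, run_id: str, status: str) -> None:
    """Append one experiment record to an append-only JSONL log."""
    record = {
        "run_id": run_id,
        "status": status,
        "started_at": datetime.now(timezone.utc).isoformat(),
    }
    log_path.parent.mkdir(parents=True, exist_ok=True)
    with log_path.open("a") as f:
        f.write(json.dumps(record) + "\n")

def read_experiments(log_path: Path) -> list:
    """Read all records back; each line is an independent JSON object."""
    if not log_path.exists():
        return []
    return [json.loads(line) for line in log_path.read_text().splitlines() if line]
```

Append-only logs are a good fit for crash-prone experiment runs: earlier records stay intact even if a run dies mid-write.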
Architecture
- Pure SKILL.md files — no Express, no React, no Postgres. Claude Code IS the runtime.
- Work products at project root — visible files (paper.tex, figures, idea.md). JSONL plumbing in `.rstack/`.
- Modal for cloud compute — Claude runs `modal run train.py` directly. No wrappers.
- Two-phase install — offline bootstrap (`./setup`) + interactive auth (`/setup` skill).
- Credentials in native stores — Modal auth stays in `~/.modal.toml`. Never in RStack config.
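In practice a skill is just a prompt file the agent reads. A minimal sketch of what one might look like (the frontmatter fields and steps here are illustrative, not copied from RStack's actual files):

```markdown
---
name: lit-review
description: Find papers, produce a structured summary and gap analysis
---

When the user asks for a literature review:
1. Search for relevant papers and collect titles, authors, and venues.
2. Write a human-readable summary to lit-review.md at the project root.
3. Append one JSON record per paper to .rstack/lit-review.jsonl.
```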
See ARCHITECTURE.md for the full design rationale.
Requirements
- Claude Code (or any Claude Code-compatible agent)
- Python 3.8+
- Modal (for cloud GPU experiments): `pip install modal && modal token new`
- tectonic (for LaTeX compilation): installed via `/setup`
Project State
Work products live at the project root as normal, visible files. Internal plumbing (structured JSONL logs) lives in `.rstack/`.
```
my-project/                     # Git root
├── idea.md                     # Your research idea
├── lit-review.md               # Human-readable literature review
├── refined-idea.md             # Sharpened hypothesis (from /novelty-check)
├── novelty-assessment.md       # Novelty analysis with score
├── experiment-plan.md          # Experiment design document
├── train.py                    # Generated experiment code
├── requirements.txt            # Experiment dependencies
├── results/                    # Raw outputs from cloud
│   └── run-001/
│       ├── metrics.json
│       ├── stdout.log
│       └── figures/
├── analysis/                   # Publication-ready figures + tables
│   ├── figures/                # PNG + PDF
│   ├── tables/                 # LaTeX source
│   └── stats.json              # Statistical summary
├── paper.tex                   # The paper
├── paper.bib                   # BibTeX citations
├── paper.pdf                   # Compiled paper
└── .rstack/                    # Internal plumbing (hidden)
    ├── lit-review.jsonl        # Structured paper records
    └── experiments.jsonl       # Append-only experiment log
```
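Given the `results/run-*/metrics.json` layout above, a results pass can aggregate every run into one summary. A hedged sketch, where the metric keys (e.g. `accuracy`) are hypothetical examples rather than RStack's defined schema:

```python
import json
from pathlib import Path

def collect_metrics(results_dir: Path) -> dict:
    """Gather metrics.json from every results/run-*/ directory into one summary."""
    summary = {}
    for metrics_file in sorted(results_dir.glob("run-*/metrics.json")):
        # Key each record by its run directory name, e.g. "run-001".
        summary[metrics_file.parent.name] = json.loads(metrics_file.read_text())
    return summary
```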
Configuration
Global config at `~/.rstack/config.yaml`:

```sh
bin/rstack-config get venue    # read: arxiv
bin/rstack-config set venue icml   # write
bin/rstack-config list         # show all
```
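The get/set behavior above can be approximated for flat `key: value` files. This is a sketch of the idea, not `bin/rstack-config` itself; the real config may be richer YAML than this minimal parser handles:

```python
from pathlib import Path

def read_config(path: Path) -> dict:
    """Parse a flat key: value config file, skipping blanks and comments."""
    config = {}
    if path.exists():
        for line in path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and ":" in line:
                key, _, value = line.partition(":")
                config[key.strip()] = value.strip()
    return config

def write_config(path: Path, key: str, value: str) -> None:
    """Set one key and rewrite the whole file."""
    config = read_config(path)
    config[key] = value
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("".join(f"{k}: {v}\n" for k, v in sorted(config.items())))
```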
Privacy & Telemetry
RStack respects your privacy. Telemetry is off by default.
| Tier | What's collected | Shared remotely |
|---|---|---|
| `off` (default) | Nothing | No |
| `anonymous` | Skill name, duration, outcome, OS | Yes (no device ID) |
| `community` | Same + stable device ID | Yes |
No code, file paths, repo names, or research content is ever collected or sent.
```sh
rstack-config set telemetry community   # opt in
rstack-config set telemetry off         # opt out
rstack-analytics                        # see your local data
```
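The tier policy in the table reduces to a small gate: send nothing when off, and attach the stable device ID only on explicit community opt-in. A sketch under the assumption of these event fields (not RStack's actual telemetry code):

```python
from typing import Optional

def build_telemetry_event(tier: str, skill: str, duration_s: float,
                          outcome: str, os_name: str,
                          device_id: str) -> Optional[dict]:
    """Return the event payload for a telemetry tier, or None when off."""
    if tier == "off":
        return None
    event = {"skill": skill, "duration_s": duration_s,
             "outcome": outcome, "os": os_name}
    if tier == "community":
        event["device_id"] = device_id  # stable ID only with explicit opt-in
    return event
```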
Comparison
| | RStack | AutoResearch (Karpathy) | Sakana AI Scientist | Ignis |
|---|---|---|---|---|
| Scope | Full pipeline | Experiment loop only | Full pipeline | Full pipeline |
| Infrastructure | None (SKILL.md files) | None (630 lines Python) | Custom agents | Express+React+Postgres+Modal |
| Cloud compute | Modal (direct CLI) | Local GPU only | Custom | Modal (custom runner) |
| Paper writing | Yes (venue-formatted) | No | Yes (42% failure rate) | Yes |
| Human-in-the-loop | Every phase boundary | Manual stop | Minimal | Per-phase |
| Install | 30 seconds | 30 seconds | Complex | Complex |
Documentation
- README.md — this file
- ARCHITECTURE.md — why RStack is built this way
- CLAUDE.md — development commands, project structure, config reference
- CONTRIBUTING.md — how to add skills and contribute
- CHANGELOG.md — release notes
- ETHOS.md — research philosophy (rigor, reproducibility, novelty)
Inspired By
- GStack — engineering skills for Claude Code. RStack follows its architecture exactly.
- AutoResearch — Karpathy's autonomous experiment loop. Inspired RStack's /experiment skill.
- Ignis — prior research automation platform. Agent prompts extracted into SKILL.md format.
License
MIT