System-Prompt-Open

Security Audit: Warn
Health: Warn
  • No license — Repository has no license file
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 28 GitHub stars
Code: Warn
  • Code scan incomplete — No supported source files were scanned during light audit
Permissions: Pass
  • Permissions — No dangerous permissions requested
Purpose
This project is an open database and gallery of system prompts extracted from 45 commercial Large Language Models (LLMs). It is intended strictly for academic safety research and for evaluating AI security vulnerabilities.

Security Assessment
The repository itself is a static HTML gallery and informational page: it does not execute shell commands, access local files, or make hidden network requests. No hardcoded secrets or dangerous permissions were found. However, the associated tooling (the JustAsk extraction framework) interacts heavily with external AI APIs to pull sensitive data, and the automated code scan was unable to analyze the source files. Overall risk is Low for simply browsing or hosting the gallery, but Medium if you intend to use the extraction tools or integrate these prompts into your own applications.

Quality Assessment
The project appears to be actively maintained, with very recent updates and a highly detailed README. It is backed by an academic paper and has garnered a respectable level of community trust with 28 GitHub stars. While the automated scan warned about a missing license, the documentation explicitly states the project uses the MIT license. It is highly transparent about its research goals and ethical boundaries.

Verdict
Safe to use for researchers and developers seeking reference material on LLM system prompts, but use with caution if you plan to adopt the actual prompt extraction methodologies.
SUMMARY

Open database of system prompts extracted from frontier LLMs using JustAsk

README.md
System Prompt Open

Your system prompt was never a secret.

Badges: arXiv | Gallery | Models | License: MIT | Maintained

Live Gallery | Paper | JustAsk Code


An open database of system prompts extracted from 45 commercial LLMs using JustAsk, a self-evolving code agent framework.
Verified at 85-95% accuracy against the leaked Claude Code source.

[!CAUTION]
Research use only. System Prompt Open is released exclusively for academic safety research, responsible disclosure, and evaluation of LLM security.
We do not condone or permit any use of these materials for unauthorized extraction, prompt theft, or exploitation of commercial systems.

Latest News

| Date | Update |
|------|--------|
| 2026-04-01 | Open-sourced System Prompt Open with 45 extracted system prompts and Live Gallery |
| 2026-04-01 | Gallery redesigned with red team theme, pagination, stat cards, and search |
| 2026-04-01 | Ground-truth verification: Claude Code extractions match leaked source at 85-95% |

Gallery

Browse extracted system prompts interactively: x-zheng16.github.io/System-Prompt-Open

45 entries covering:

  • Claude Code (4 agents, verified against leaked source)
  • Gemini CLI (code agent)
  • 40 commercial LLMs (OpenAI, Anthropic, Google, Meta, DeepSeek, xAI, and more)

Ground-Truth Verification

Claude Code's source was leaked via a .map file in the npm registry (March 2026).
We compared it against our JustAsk extractions from January 2026 -- two months before the leak.

| Agent | Accuracy | Gap |
|-------|----------|-----|
| Explore Subagent | 95% | Only missed `pip install` in bash restrictions |
| Plan Subagent | 93% | Minor output format embellishment |
| General-Purpose | 90% | Missed completeness directive |
| Main Agent | 85% | Missed 2 entire sections |
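The README does not spell out how the accuracy percentages were computed. As an illustrative sketch only (the similarity metric here is an assumption, not the project's actual method), a character-level similarity ratio from Python's standard `difflib` gives one simple way to score an extraction against a known ground truth:

```python
import difflib

def extraction_accuracy(extracted: str, ground_truth: str) -> float:
    """Similarity ratio in [0, 1] between an extracted prompt and the known source."""
    return difflib.SequenceMatcher(None, extracted, ground_truth).ratio()

# Toy example with hypothetical prompt snippets (not real extractions):
truth = "You are Claude Code. Never run pip install in bash."
extracted = "You are Claude Code. Never run in bash."
score = extraction_accuracy(extracted, truth)
print(f"similarity: {score:.0%}")
```

A real evaluation would likely normalize whitespace and compare section by section, but the idea is the same: higher ratios mean the extraction recovered more of the source verbatim.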

How to Contribute

| Step | What to do |
|------|------------|
| 1. Extract | Use JustAsk or your own method to extract a system prompt |
| 2. Verify | Run multiple extractions and compute self-consistency |
| 3. Submit | Open an Issue with the model name, extracted prompt, and consistency score |

[!IMPORTANT]
We handle redaction before publishing. Do not worry about masking sensitive content in your submission.
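Step 2 asks for a self-consistency score without prescribing a formula. One plausible reading, sketched here purely as an assumption, is the mean pairwise similarity across repeated extractions of the same model:

```python
import difflib
from itertools import combinations

def self_consistency(extractions: list[str]) -> float:
    """Mean pairwise similarity over all pairs of repeated extractions."""
    pairs = list(combinations(extractions, 2))
    if not pairs:
        raise ValueError("need at least two extractions")
    total = sum(difflib.SequenceMatcher(None, a, b).ratio() for a, b in pairs)
    return total / len(pairs)

# Hypothetical repeated extraction runs for one model:
runs = [
    "You are a helpful assistant. Refuse unsafe requests.",
    "You are a helpful assistant. Refuse unsafe requests.",
    "You are a helpful assistant. Decline unsafe requests.",
]
print(f"self-consistency: {self_consistency(runs):.2f}")
```

Scores near 1.0 suggest the model is emitting a stable prompt rather than hallucinating a different one on each run, which is presumably what the maintainers want to see in a submission.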

Related Projects

From the same team:

  • ISC-Bench -- Internal Safety Collapse in Frontier LLMs
  • JustAsk -- Curious Code Agents Reveal System Prompts in Frontier LLMs
  • Awesome-Embodied-AI-Safety -- Safety in Embodied AI: Risks, Attacks, and Defenses
  • Awesome-Large-Model-Safety -- Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety
  • XTransferBench -- Super Transferable Adversarial Attacks on CLIP (ICML 2025)
  • BackdoorLLM -- A Comprehensive Benchmark for Backdoor Attacks on LLMs (NeurIPS 2025)
  • BackdoorAgent -- Backdoor Attacks on LLM-based Agent Workflows

Citation

BibTeX:

@article{zheng2026justask,
  title={Just Ask: Curious Code Agents Reveal System
         Prompts in Frontier LLMs},
  author={Zheng, Xiang and Wu, Yutao and Huang, Hanxun
          and Li, Yige and Ma, Xingjun and Li, Bo
          and Jiang, Yu-Gang and Wang, Cong},
  journal={arXiv preprint arXiv:2601.21233},
  year={2026}
}

Plain text:

Xiang Zheng, Yutao Wu, Hanxun Huang, Yige Li, Xingjun Ma, Bo Li, Yu-Gang Jiang, and Cong Wang. "Just Ask: Curious Code Agents Reveal System Prompts in Frontier LLMs." arXiv preprint arXiv:2601.21233, 2026.

License

MIT
