# llm-wiki-pm
A Claude Code skill that turns your PM work into a persistent, compounding
knowledge base. Ingest meetings, analyst reports, and strategy docs.
Query across months of context. Let the agent handle the bookkeeping.
Based on Andrej Karpathy's LLM Wiki pattern,
tuned for product management: competitive intel, customer notes, strategy,
roadmap, AI market tracking.
## Why
Most PM work today spreads across Slack threads, meeting transcripts, analyst
PDFs, and one-off notes. RAG tools like NotebookLM retrieve from raw sources
each query, so knowledge never compounds. Personal wikis fail because the
bookkeeping overhead outgrows the value.
This skill gives you the middle path: you curate sources, the agent maintains
an interlinked markdown wiki that stays current. Every ingest touches 5-15
pages. Every query cites specific wiki entries. The wiki compounds.
## What you get
- Three-layer architecture: immutable `raw/` sources, agent-owned wiki pages, and `SCHEMA.md` governing structure
- Ingest / query / update / lint / archive flows with discipline guardrails
- Supersession with auto-redirect of inbound links
- Crystallize pattern: transcripts become structured decision digests
- Privacy-first: pre-ingest filter + `private:` frontmatter flag
- Bundled wiki-search: semantic + TF-IDF search over your whole wiki (auto-indexes on startup)
- Obsidian-compatible: works as a vault out of the box
- Worker agents: five subagents (indexer, fetcher, link-validator, lint, people-updater) handle expensive ops without bloating the lead session
- Role packs: PM, researcher, executive, founder personas tune proactive recall sensitivity, output format, and crystallize templates per session
- Sub-skills (optional, install as needed):
  - `llm-wiki-brief`: daily/weekly briefs, tag digests, coverage brief
  - `llm-wiki-prd`: PRD drafts, user stories, release notes (wiki-grounded, no fabrication)
  - `llm-wiki-research`: research sprints, competitive deep dives, stub enrichment
  - `llm-wiki-crm`: relationship health, account health dashboard, feature ask tracker, auto-enrichment
- MY-INTEGRATIONS.md: learned source routing — auto-populated from actual ingest activity
- Next.js embed path for platform deployments
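The supersession and privacy features above live in page frontmatter. As an illustrative sketch only: the `private:` and `superseded_by:` flags are the ones named in this README, while the `title`/`tags` fields and values here are invented for the example (the real template ships in `templates/SCHEMA.md`):

```markdown
---
title: acme-corp            # illustrative field
tags: [customer]            # illustrative field
private: true               # privacy flag: keep this page out of shared outputs
superseded_by: acme-corp-q3 # supersession: inbound links auto-redirect to the newer page
---
```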
## Install

### Option A: Claude Code plugin (recommended)

```shell
claude plugin marketplace add anh-chu/llm-wiki-pm
claude plugin install llm-wiki-pm@anh-chu-plugins
```

### Option B: Standalone skill via `npx skills`

```shell
npx skills add anh-chu/llm-wiki-pm --skill llm-wiki-pm -a claude-code -g
```

Install individual sub-skills the same way:

```shell
npx skills add anh-chu/llm-wiki-pm --skill llm-wiki-crm -a claude-code -g
npx skills add anh-chu/llm-wiki-pm --skill llm-wiki-brief -a claude-code -g
```
This installs the skill with all its scripts, templates, and references. What you don't get without the full plugin: session hooks (auto-scaffold, wikilink validation, log rotation), worker agents, and role packs. Core wiki operations (ingest, query, update, lint) work fine — you just need to create the wiki directory structure manually on first use.
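For that standalone route, a minimal manual scaffold might look like the following. The directory and file names are taken from the Quick Start section of this README; note the plugin's SessionStart hook additionally seeds these files from templates, which this sketch does not:

```shell
# Minimal manual wiki scaffold for a standalone-skill install.
# Directory names match what the plugin's SessionStart hook creates.
WIKI="$HOME/pm-wiki"
mkdir -p "$WIKI/raw" "$WIKI/entities" "$WIKI/concepts" \
         "$WIKI/comparisons" "$WIKI/queries" "$WIKI/_archive"
touch "$WIKI/SCHEMA.md" "$WIKI/index.md" "$WIKI/log.md" "$WIKI/overview.md"
```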
### Option C: Symlink from a clone

```shell
git clone https://github.com/anh-chu/llm-wiki-pm ~/llm-wiki-pm
mkdir -p ~/.claude/skills
ln -s ~/llm-wiki-pm/skills/llm-wiki-pm ~/.claude/skills/llm-wiki-pm
```

Same trade-offs as Option B. For hook automation and auto-scaffold, also install the plugin:

```shell
claude plugin install llm-wiki-pm@anh-chu-plugins
```
### Then: start Claude Code

Restart Claude Code after enabling the plugin. You will be prompted for your wiki path and domain. The SessionStart hook creates the wiki directory on the first session start.

If you used Option B without installing the plugin, set `WIKI_PATH` before starting Claude Code:

```shell
echo 'export WIKI_PATH=$HOME/pm-wiki' >> ~/.bashrc && source ~/.bashrc
```
Full setup, including mobile Obsidian sync, in
GETTING_STARTED.md.
How it compares
| llm-wiki-pm | kfchou/wiki-skills | lewislulu/llm-wiki-skill | lucasastorian/llmwiki | NotebookLM | |
|---|---|---|---|---|---|
| Shape | Single skill | 5 skills | Skill + plugin + server | Full web app | SaaS |
| Storage | Plain markdown | Plain markdown | Plain markdown | Supabase + S3 | Cloud |
| Search | Bundled semantic + TF-IDF (wiki-search) + backlinks | grep + index | grep + index | PGroonga | Proprietary |
| Update discipline | Diffs + supersession fields + auto-link rewrite | Diffs + source cite | Human-in-loop audit | None explicit | N/A |
| Privacy | Pre-ingest filter + private: flag |
None | None | User-scoped | SaaS ToS |
| Transcript support | crystallize flow (decisions + actions) |
Generic ingest | Generic ingest | Generic ingest | Source-only |
| Install target | Claude Code | Claude Code | OpenClaw / Codex | Self-host web | SaaS |
| Ops burden | None (local files) | None | Obsidian plugin + Node server | Supabase + S3 + OCR | Zero |
| Scales to 1000+ pages | Yes (wiki-search, bundled) | Degrades | Degrades | Yes | Yes |
| PM-tuned taxonomy | Yes (competitive, customer, strategy, roadmap, ai) | No | No | No | No |
For a deeper breakdown of which Karpathy and Rohit v2 ideas this implements,
see the design notes below.
## Target users

Good fit if you:

- Work as a PM, analyst, researcher, or founder with lots of meetings and reports
- Want a local-first, markdown-based knowledge base that you own
- Use Claude Code as your primary agent
- Are comfortable on a terminal (you'll occasionally run `lint.py`)

Not a fit if you:

- Want a zero-terminal SaaS → use NotebookLM
- Need team collaboration out of the box → use Notion or a shared Obsidian vault
- Don't use Claude Code → `AGENTS.md` is the portable reference if you want to port to another agent
## Layout

```
llm-wiki-pm/
├── AGENTS.md                # Universal agent behavioral contract
├── CONTRIBUTING.md
├── .claude-plugin/
│   ├── plugin.json
│   └── marketplace.json
├── .claude/
│   ├── agents/              # Worker subagents
│   │   ├── worker-wiki-indexer.md
│   │   ├── worker-source-fetcher.md
│   │   ├── worker-link-validator.md
│   │   ├── worker-lint.md
│   │   └── worker-people-updater.md
│   └── roles/               # Role packs
│       ├── _template.md
│       ├── product-manager.md
│       ├── researcher.md
│       ├── executive.md
│       └── founder.md
├── hooks/
│   └── hooks.json
├── scripts/
│   └── update-safe.sh
└── skills/
    ├── llm-wiki-pm/         # Core skill (required)
    │   ├── SKILL.md
    │   ├── hooks/ (session-start.sh, post-write.sh, session-stop.sh)
    │   ├── references/
    │   ├── scripts/ (lint.py, backlinks.py)
    │   └── templates/ (SCHEMA.md, index.md, overview.md, log.md, persona.md,
    │                   MY-INTEGRATIONS.md)
    ├── llm-wiki-brief/      # Optional: daily/weekly briefs, tag digests
    │   └── SKILL.md
    ├── llm-wiki-prd/        # Optional: PRD drafts, user stories, release notes
    │   └── SKILL.md
    ├── llm-wiki-research/   # Optional: research sprints, competitive deep dives
    │   └── SKILL.md
    └── llm-wiki-crm/        # Optional: relationship health, CRM layer
        ├── SKILL.md
        └── templates/
            └── SCHEMA-crm-fields.md
```
## Setup
Two scenarios documented in detail in GETTING_STARTED.md:
- Human user with Claude Code: enable plugin, wiki auto-scaffolds on first session
- Application-orchestrated: programmatic wiki provisioning for platform deployments
## Quick Start

### 1. Bootstrap a wiki

Enable the plugin. When prompted, enter your wiki path and domain.
On the first session start, the SessionStart hook creates `~/pm-wiki/` with SCHEMA.md, index.md, log.md, overview.md, and the raw/entities/concepts/comparisons/queries/_archive subdirectories.
### 1b. Wiki search (bundled, automatic)

Semantic + TF-IDF search over your wiki is bundled via wiki-search (`@wirux/mcp-markdown-vault`). It auto-indexes on startup; no setup needed. An ~80MB model is downloaded on first use and cached in `.markdown_vault_mcp/` inside your wiki directory. Add `.markdown_vault_mcp/` to `.gitignore`.
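A one-time, idempotent way to do that, assuming your wiki lives at `~/pm-wiki` (adjust the path otherwise):

```shell
# Add the wiki-search index cache to the wiki's .gitignore (idempotent).
WIKI="$HOME/pm-wiki"   # adjust if your wiki lives elsewhere
mkdir -p "$WIKI"       # no-op when the wiki already exists
grep -qx '.markdown_vault_mcp/' "$WIKI/.gitignore" 2>/dev/null \
  || echo '.markdown_vault_mcp/' >> "$WIKI/.gitignore"
```

Because of the `grep` guard, re-running the snippet never duplicates the entry.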
### 2. Review SCHEMA.md

Open `~/pm-wiki/SCHEMA.md`. Adjust:
- Domain statement (scope)
- Tag taxonomy (add/remove tags for your specific accounts, competitors, themes)
- Page thresholds (tune later after a few ingests)
### 3. Install as Claude Code skill

Option A, user-level (available in every project):

```shell
mkdir -p ~/.claude/skills
ln -s ~/llm-wiki-pm/skills/llm-wiki-pm ~/.claude/skills/llm-wiki-pm
```

Option B, project-level (per-repo):

```shell
mkdir -p .claude/skills
ln -s ~/llm-wiki-pm/skills/llm-wiki-pm .claude/skills/llm-wiki-pm
```

Adjust `~/llm-wiki-pm` to wherever you cloned the repo. Restart Claude Code and verify with `/skills`; `llm-wiki-pm` should appear.
The skill auto-activates on ingest/query/update/lint phrasing (see SKILL.md "When This Skill Activates").
### 4. First ingest

In Claude Code:
"Ingest this Gartner Magic Quadrant report: `<link>`"
Claude reads SKILL.md, orients on SCHEMA + index + log + overview, surfaces
takeaways, creates/updates pages, logs.
### 5. Run lint periodically

```shell
python3 ~/llm-wiki-pm/skills/llm-wiki-pm/scripts/lint.py ~/pm-wiki
# opens queries/lint-YYYY-MM-DD.md
```
### 6. Mobile access (optional)

See `references/obsidian-sync.md` for obsidian-headless + systemd setup.
## Workflow Patterns

**Weekly competitive digest:** ingest 3-5 analyst links in one session. Batch updates, one log entry, refresh overview.md.

**Pre-meeting prep:** query "what do we know about `<customer>`?" → Claude reads `entities/<customer>.md` + recent log → offers to file a post-meeting update.

**Monthly 1:1 follow-up:** ingest transcript → extract decisions/themes → update relevant concept pages → link from person page.

**Quarterly review:** lint, triage, rotate log, refresh overview, prune tag taxonomy.
## Scope
Wiki = long-term curated facts + sources you review, cite, and share with
colleagues. Not a replacement for session memory or notes apps. If you pair
with a memory tool (Claude's native memory, mem0, Hindsight), keep them
non-overlapping: wiki holds facts + sources, memory holds persona + session
state.
## Tests

Hook scripts and plugin manifest validation:

```shell
python3 -m venv .venv && .venv/bin/pip install pytest -q
.venv/bin/python -m pytest tests/ -v
```

43 tests covering scaffold, wikilink validation, log rotation, stdin parsing, and plugin manifest compliance. All tests create isolated temp wikis and feed the real Claude Code hook JSON schema to the scripts.

## License

MIT.
## Credits

Built on prior art from:

- Andrej Karpathy, LLM Wiki: the original pattern, stop re-deriving and start compiling. Three-layer architecture, Memex lineage, and the insight that LLMs are the first librarians who don't get bored of bookkeeping.
- Rohit G, LLM Wiki v2: lifecycle concepts (supersession, privacy, crystallization, self-healing lint). We cherry-picked the four highest-ROI v2 ideas for PM work.
- kfchou/wiki-skills: update discipline with diffs and stale-claim sweep, tiered lint reports, evolving `overview.md` synthesis.
- lewislulu/llm-wiki-skill: audit/feedback loop design (inspiration for future team-mode support).
- @wirux/mcp-markdown-vault: semantic + TF-IDF search engine for markdown vaults (bundled as wiki-search).
## Design notes

How this skill maps to Karpathy's original gist and Rohit's v2 extensions:
### Karpathy core (10/10)

- Three-layer architecture (raw sources, agent-owned wiki, schema)
- LLM owns the wiki; human curates sources
- Ingest / query / lint operations
- `index.md` content catalog + `log.md` chronological record
- File good answers back as pages (`queries/` dir)
- Obsidian compatibility (Graph, Dataview, frontmatter)
- Schema as key configuration, co-evolved
- Ingests touch 10-15 pages routinely
- Bundled semantic wiki-search via `@wirux/mcp-markdown-vault`
- Multi-format outputs: Marp, matplotlib, CSV, Mermaid, Canvas
### v2 cherry-picks (7/16)

Implemented:

- Explicit supersession with `supersedes:`/`superseded_by:` fields + auto-redirect
- Privacy filter (pre-ingest checklist + `private:` frontmatter flag)
- Self-healing lint (`--auto-fix` for safe repairs)
- Crystallization (transcript → decision digest)
- Schema as the real product
- Contradiction handling with frontmatter flag
- Backlink tracing (`scripts/backlinks.py` for structural refs)
Intentionally skipped for solo PM use (overkill):
- Confidence decay curves
- Consolidation tiers (working/episodic/semantic/procedural memory)
- Typed knowledge graph with relationship types
- Multi-agent mesh sync
- Quality scoring pipeline