analytics-skills
Health: Warn
- License: MIT
- Description: repository has a description
- Active repo: last push 0 days ago
- Low visibility: only 5 GitHub stars
Code: Pass
- Code scan: 3 files scanned in a light audit; no dangerous patterns found
Permissions: Pass
- Permissions: no dangerous permissions requested
This project provides a collection of instruction sets (skills) and tool-maps for AI agents, allowing them to analyze web analytics like a senior analyst. It supports platforms like GA4, Amplitude, and Mixpanel, teaching the AI how to accurately diagnose traffic, evaluate funnels, and read A/B tests.
Security Assessment
The overall risk is Low. The light code audit found no dangerous patterns, hardcoded secrets, or requests for risky permissions. Because this acts as an MCP server that reads analytics data, it inherently requires access to your analytics platforms, which often contain sensitive business metrics. However, the tool itself appears to act as an analytical framework rather than executing arbitrary shell commands or making independent, malicious network requests.
Quality Assessment
The project is very new and currently has low community visibility (only 5 GitHub stars). Despite the lack of widespread adoption, it is under active development (last updated today). It is properly licensed under the standard MIT license, making it safe for commercial and private use. It features extensive, high-quality documentation and clear instructions for setup.
Verdict
Safe to use.
Skills for AI agents to read web analytics like a senior analyst: diagnose traffic, judge channels, read funnels, declare typed events, and read A/B tests. Tool-maps for Amplitude, Clamp, GA4, Mixpanel, and PostHog.
analytics-skills
Analytics skills for Claude, Cursor, and other AI agents. Read web analytics like a senior analyst: diagnose traffic changes, judge channel quality, read funnels, declare typed events, and read A/B tests without the usual rookie mistakes. Ships with tool-maps for Amplitude, Clamp, GA4, Mixpanel, and PostHog; add your own under tool-maps/. The schema-authoring and experiment-reading skills are built on the open Event Schema spec.
What's in the box
| Skill | What it does |
|---|---|
| analytics-profile-setup | One-time interview that captures your business context (model, primary conversion, traffic range, ICP, data stack) into a local analytics-profile.md. Every other skill reads this file so answers are calibrated to your industry and scale. Run this first. |
| analytics-diagnostic-method | The spine. Five-step method: load profile, frame the question, build a MECE hypothesis tree, triangulate, present with the Pyramid Principle. Covers signal vs noise and Simpson's paradox. Referenced by every other skill. |
| traffic-change-diagnosis | Drill path for "why did traffic change". Fingerprints for tracking regressions, bot spikes, deploy-correlated drops, campaign ramps, SEO decay, and platform changes. Measurement checks first, always. |
| channel-and-funnel-quality | Volume × engagement × conversion as a matrix. Vanity-traffic detection. Expected drop-off ranges per funnel step type. Mix-shift handling. Industry-specific benchmarks. |
| metric-context-and-benchmarks | What's a good bounce / engagement / duration / CVR / churn / LTV:CAC / activation, by model. When each metric lies. Minimum sample sizes before trusting a rate. |
| event-schema-author | Authors event-schema.yaml from existing track() calls: a portable, typed declaration of every product analytics event the codebase fires. The CLI generates a TypeScript type so call sites are autocompleted and type-checked at build time. Vendor-neutral; works with any analytics SDK. |
| experiment-result-reader | Reads a running A/B test honestly: pulls per-variant exposure and conversion counts, computes lift, applies sample-size and sequential-testing discipline, checks for mix-shift and sample-ratio mismatch, and returns a verdict with caveats instead of a false positive. Reads the experiment from the experiments: section of event-schema.yaml when present. |
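To make the sample-ratio-mismatch check concrete, here is a minimal sketch of the idea (not the skill's actual code): a chi-square goodness-of-fit test on per-variant exposure counts against the intended assignment ratio. The counts and the 50/50 split are hypothetical.

```python
import math

def srm_check(exposures_a: int, exposures_b: int,
              expected_ratio: float = 0.5,
              alpha: float = 0.001) -> tuple[float, bool]:
    """Chi-square goodness-of-fit test for sample-ratio mismatch.

    Returns (p_value, mismatch_detected). A tiny p-value means the observed
    split is very unlikely under the intended assignment ratio, so the
    experiment's randomization or exposure logging is suspect.
    """
    total = exposures_a + exposures_b
    expected_a = total * expected_ratio
    expected_b = total * (1 - expected_ratio)
    chi2 = ((exposures_a - expected_a) ** 2 / expected_a
            + (exposures_b - expected_b) ** 2 / expected_b)
    # p-value for a chi-square statistic with 1 degree of freedom
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return p_value, p_value < alpha

# Hypothetical counts: a 50/50 test that actually delivered 50,000 vs 48,500
p, mismatch = srm_check(50_000, 48_500)
print(f"p = {p:.2e}, SRM detected: {mismatch}")
```

A conservative alpha (0.001 rather than 0.05) is conventional here because an SRM alert invalidates the whole experiment, so false alarms are costly.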
Built on real research
Frameworks and benchmarks the skills lean on, with sources:
- Method: Minto's MECE and Pyramid Principle (1985), Simpson (1951) for the mix-shift trap, standard 95% CI sample-size rules for noise-vs-signal calls.
- Benchmarks: Unbounce 2024 (57M conversions), Wordstream 2025, Ruler 2025 (100M+ data points by industry), Littledata Shopify 2023, Imperva Bad Bot Report, Mixpanel, ChartMogul, David Skok SaaS Metrics 2.0.
- Definitions: GA4 metric definitions taken from Google's docs (note: GA4 bounce rate is not the UA single-pageview bounce rate).
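The 95% CI sample-size rule the method leans on can be sketched as follows (a standard normal-approximation formula, not code from this repo; the example rate and margin are hypothetical):

```python
import math

def min_sample_for_margin(p: float, margin: float, z: float = 1.96) -> int:
    """Smallest n so a 95% CI around an observed rate p stays within ±margin.

    Normal approximation for a proportion: n = z^2 * p * (1 - p) / margin^2.
    Below this n, a 'change' in the rate is indistinguishable from noise.
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Hypothetical call: to trust a ~3% conversion rate to within ±0.5 points,
# you need roughly 4,472 visitors
print(min_sample_for_margin(0.03, 0.005))
```

This is why the benchmarks skill refuses to judge a rate on a few hundred sessions: halving the margin quadruples the required sample.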
Install
Pick whichever fits your setup. Every route installs the same skills.
Claude Code, plugin (recommended)
```
# Add the marketplace once
/plugin marketplace add clamp-sh/analytics-skills

# Install
/plugin install analytics-skills@clamp-sh
```
Skills become available as /analytics-skills:<name> and are auto-invoked when the task matches.
Any agent (Cursor, Claude Code, Copilot, others)
Install via Vercel's open skills CLI:
```shell
npx skills add clamp-sh/analytics-skills
```
Works with any tool that supports the Agent Skills spec.
Claude Code, standalone (no plugin machinery)
Clone into your personal skills directory:
```shell
git clone https://github.com/clamp-sh/analytics-skills.git /tmp/analytics-skills
cp -R /tmp/analytics-skills/skills/* ~/.claude/skills/
```
Or run the bundled installer, which symlinks each skill so that git pull updates them:

```shell
git clone https://github.com/clamp-sh/analytics-skills.git ~/.analytics-skills
~/.analytics-skills/scripts/install-personal.sh
```
For a single project, clone into .claude/skills/ inside the repo instead.
Codex CLI
Same SKILL.md format, different location. User-level install:
```shell
git clone https://github.com/clamp-sh/analytics-skills.git /tmp/analytics-skills
cp -R /tmp/analytics-skills/skills/* ~/.codex/skills/
```
For a project, copy into .agents/skills/ at the repo root instead. Codex auto-detects new skills; restart if needed.
claude.ai
Build the per-skill zips and upload each via Settings → Features → Skills:
```shell
./scripts/build-claude-ai-zips.sh
# Produces dist/analytics-diagnostic-method.zip etc.
```
Pre-built zips for each release are also attached to the GitHub release page.
Usage
These are model-invoked skills. You don't need to call them by name; just ask real analytics questions and Claude will load the relevant skill.
First run, on a new project:
```
Set me up. I want the skills calibrated to my business.
```
This loads analytics-profile-setup, walks a 5-minute interview, and writes analytics-profile.md to the repo root. Every subsequent question gets industry-aware answers.
After setup, ask real questions:
```
Traffic dropped 30% on Tuesday. What happened?
```
Loads traffic-change-diagnosis. Walks the hypothesis tree (measurement → time-shape → channel → cohort → content), pulls numbers from your analytics source, returns a diagnosis. Not a screenshot of a chart.
```
Is our signup funnel broken? Pricing → checkout is converting at 14%.
```
Loads channel-and-funnel-quality and metric-context-and-benchmarks. Compares against expected step drop-off, slices by cohort, flags whether 14% is low, normal, or suspicious given your sample size and model.
```
Our bounce rate is 68%. Is that bad?
```
Loads metric-context-and-benchmarks. Handles the GA4 vs UA definition gotcha, looks up the relevant page-type range, flags the sample-size caveat if needed.
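The GA4 vs UA gotcha in concrete terms: per Google's docs, GA4 bounce rate is the share of sessions that were not engaged (an engaged session lasts 10+ seconds, fires a key event, or has 2+ page views), while UA counted single-pageview sessions. A minimal sketch with hypothetical session records shows how the two definitions diverge on the same traffic:

```python
from dataclasses import dataclass

@dataclass
class Session:
    duration_s: float
    pageviews: int
    had_key_event: bool

def ga4_bounce_rate(sessions: list[Session]) -> float:
    """GA4 bounce rate = 1 - engagement rate. A session counts as engaged
    if it lasted 10+ seconds, fired a key event, or had 2+ pageviews."""
    engaged = sum(
        1 for s in sessions
        if s.duration_s >= 10 or s.had_key_event or s.pageviews >= 2
    )
    return 1 - engaged / len(sessions)

def ua_bounce_rate(sessions: list[Session]) -> float:
    """Classic UA bounce rate: share of single-pageview sessions."""
    return sum(1 for s in sessions if s.pageviews == 1) / len(sessions)

# Hypothetical traffic: a long single-page read bounces under UA, not GA4
sessions = [
    Session(45, 1, False),   # long read, one page: UA bounce, GA4 engaged
    Session(3, 1, False),    # quick exit: bounce under both definitions
    Session(8, 3, False),    # short but multi-page: engaged in GA4
    Session(120, 2, True),   # engaged under every definition
]
print(ga4_bounce_rate(sessions), ua_bounce_rate(sessions))  # 0.25 vs 0.5
```

On this toy traffic the same four sessions read as a 50% bounce rate in UA terms but only 25% in GA4 terms, which is exactly why the skill checks which definition your number came from before benchmarking it.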
Supported analytics platforms
The skills are platform-neutral; per-platform MCP invocations live in tool-maps/. One file per supported analytics tool, all covering the same canonical 17-row workflow taxonomy.
| Tool | Tool-map | Surface |
|---|---|---|
| Amplitude | amplitude.md | query_amplitude_data covers most rows; cohorts and experiments are dedicated tools |
| Clamp | clamp.md | Dedicated MCP tool per row of the canonical taxonomy |
| GA4 | ga4.md | run_report for aggregate rows; funnels and cohort retention not exposed by the wrapper |
| Mixpanel | mixpanel.md | Run-Query types (insights, funnels, flows, retention) |
| PostHog | posthog.md | Trends/Funnels/Retention/Paths insights plus HogQL |
The full row-by-row coverage matrix is at tool-maps/capability-matrix.md. analytics-profile-setup records the active platform in analytics-profile.md under tool_map:; downstream skills load the matching tool-map automatically.
Using a different analytics source? The method still applies. Add a tool-map for it (see tool-maps/README.md for the template).
Contributing
Issues and PRs welcome. See CONTRIBUTING.md for how to add a skill and the style conventions we follow.
License
MIT. See LICENSE.