bbc-skill

agent
Security Audit
Warning
Health Warning
  • No license — Repository has no license file
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 12 GitHub stars
Code Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Passed
  • Permissions — No dangerous permissions requested
Purpose
This agent-native CLI fetches and aggregates all comments and metadata from Bilibili videos. It is designed for content creators to export their own comment data into structured JSON formats for audience and sentiment analysis by AI coding agents.

Security Assessment
The overall risk is Low. The tool functions by consuming a user-provided Bilibili session cookie (delegated auth) to make network requests directly to the platform's APIs. A light code audit across 12 files found no dangerous patterns, no hardcoded secrets, and no dangerous permission requests. The code relies strictly on the Python standard library (zero dependencies), meaning it does not silently execute shell commands or download hidden external packages.

From a data privacy standpoint, the tool processes potentially sensitive account cookies and scrapes user comments. However, the developers have explicitly included a strict "Responsible Use" section in the documentation. This guide advises users to analyze only their own videos, respect the built-in randomized throttling to prevent spam, and practice data minimization (fetch, analyze, delete).

Quality Assessment
The project appears to be highly functional and actively maintained, with its last push occurring today. Its 12 GitHub stars indicate a small but nonzero level of community trust. The documentation is thorough and details compatibility across multiple AI agent platforms. However, the repository is missing a formal open-source license file. This is a notable drawback: without a license, the code is technically proprietary with all rights reserved by the creator, which may restrict corporate or commercial usage.

Verdict
Safe to use, but proceed with caution regarding the unlicensed code status and ensure you strictly follow the platform's Terms of Service regarding account cookies and data scraping.
SUMMARY

Agent-native CLI for fetching Bilibili video comments (UP主 self-analysis). Zero dependencies. Claude Code / Codex / OpenClaw support.

README.md

bbc-skill · Bilibili Comment Collector

Chinese Docs · Online Docs

Built for Bilibili UP主 (content creators): fetch every comment on your own
videos and feed them to Claude Code / Codex / Gemini / any agent for
sentiment / keyword / audience analysis.

  • 🐍 Zero dependencies — Python 3.9+ standard library only, no pip install
  • 💬 Complete — top-level + nested + pinned comments, nothing skipped
  • 📊 Video metadata — title, view/like/coin/favorite counts, tags included
  • 🤖 Agent-native CLI — stable stdout JSON envelope, NDJSON stderr
    progress, distinct exit codes, dry-run, schema introspection
  • 🧑‍🎤 Batch mode — fetch every video of a UP主 sequentially (one at a time,
    5-10s randomised cooldown between videos)
  • 🔐 Delegated auth — human logs in once, agent just consumes the cookie;
    no browser automation
  • ♻️ Resumable — re-running the same BV skips completed pages; --since
    for incremental monitoring
  • 📁 Analysis-friendly — comments.jsonl + summary.json + raw/ archive

Multi-Platform Support

Follows the Agent Skills spec. Works with every
major AI coding agent:

Platform Status Details
Claude Code ✅ Full support Native SKILL.md format
OpenAI Codex ✅ Full support agents/openai.yaml sidecar file
OpenClaw / ClawHub ✅ Full support metadata.openclaw namespace
Hermes Agent ✅ Full support metadata.hermes namespace
Opencode ✅ Full support Reads ~/.claude/skills/ automatically
SkillsMP ✅ Indexed GitHub topics configured

⚠️ Responsible Use

Please read and accept these guidelines before using this tool.

  • Personal, low-volume, legal use only: analyze comments on your own
    videos, or assist another creator with their explicit authorization.
  • Respect the built-in throttling: 1s per request, 5-10s random
    cooldown between videos in batch mode. Do not patch these out.
  • Do NOT use for:
    • Mass-scraping strangers' videos
    • Building derivative data products for resale or public redistribution
    • Bypassing rate limits, spoofing User-Agents, using proxy pools to evade
      anti-bot systems
    • High-frequency automation (e.g. daily full re-scans of the same channel)
    • Harassment, doxxing, coordinated attacks, or targeting specific users
  • 📜 Comply with Bilibili's ToS and robots.txt.
    For commercial/organization use, switch to the official
    Bilibili Open Platform APIs.
  • 🔒 Data minimization: fetch → analyze → delete. Do not retain raw data
    long-term, and do not share files containing user IDs or IP locations.
  • 🎯 The tool is designed for one-shot analyses, not long-term
    surveillance.

This project is not affiliated with bilibili.com. Any account-level risk
control, bans, or legal consequences are the user's responsibility. When
in doubt about whether a specific use case is allowed — don't run it.


Install

Claude Code

# Global install (available in all projects)
git clone https://github.com/Agents365-ai/bbc-skill.git ~/.claude/skills/bbc-skill

# Project-level install
git clone https://github.com/Agents365-ai/bbc-skill.git .claude/skills/bbc-skill

OpenAI Codex

git clone https://github.com/Agents365-ai/bbc-skill.git ~/.agents/skills/bbc-skill
# Project-level
git clone https://github.com/Agents365-ai/bbc-skill.git .agents/skills/bbc-skill

OpenClaw / ClawHub

# Via ClawHub
clawhub install bbc-skill

# Manual
git clone https://github.com/Agents365-ai/bbc-skill.git ~/.openclaw/skills/bbc-skill

Opencode

git clone https://github.com/Agents365-ai/bbc-skill.git ~/.config/opencode/skills/bbc-skill
# Or reuse an existing ~/.claude/skills/bbc-skill — Opencode reads that path too

Hermes Agent

git clone https://github.com/Agents365-ai/bbc-skill.git ~/.hermes/skills/data/bbc-skill

SkillsMP

skills install bbc-skill

Standalone CLI (no skill)

git clone https://github.com/Agents365-ai/bbc-skill.git && cd bbc-skill
./scripts/bbc --help
# Or add to PATH
export PATH="$PWD/scripts:$PATH"

Installation paths summary

Platform Global path Project path
Claude Code ~/.claude/skills/bbc-skill/ .claude/skills/bbc-skill/
OpenAI Codex ~/.agents/skills/bbc-skill/ .agents/skills/bbc-skill/
OpenClaw / ClawHub ~/.openclaw/skills/bbc-skill/ skills/bbc-skill/
Opencode ~/.config/opencode/skills/bbc-skill/ .opencode/skills/bbc-skill/
Hermes Agent ~/.hermes/skills/data/bbc-skill/ Via external_dirs config
SkillsMP N/A (installed via CLI) N/A

Quick start

Step 1 · Export your Bilibili cookie

Bilibili's comment API rate-limits and returns thin data for unauthenticated
requests. For full UP主 analysis you must authenticate with a cookie.

Recommended: the open-source Chrome extension
Get cookies.txt LOCALLY
— runs entirely locally, uploads nothing.

  1. Install Get cookies.txt LOCALLY from the Chrome Web Store.
  2. Visit https://www.bilibili.com and confirm you
    are logged in (avatar visible top-right).
  3. Click the extension icon → Export → download
    www.bilibili.com_cookies.txt.
  4. Save it somewhere convenient, e.g. ~/Downloads/bilibili_cookies.txt.

Other options:

  • Firefox: cookies.txt
    add-on.
  • Edge: the same Chrome extension works.
  • Manual: DevTools F12 → Application → Cookies → copy the SESSDATA value,
    then export BBC_SESSDATA="<value>".

Do not share SESSDATA — it authorizes full account access.

Step 2 · Verify the cookie works

./scripts/bbc cookie-check --cookie-file ~/Downloads/bilibili_cookies.txt

Expected:

{"ok": true, "data": {"mid": 441831884, "uname": "探索未至之境", "vip": true, "level": 5, ...}}

If it fails: confirm you are currently logged in at bilibili.com, re-export
the cookie, and retry.

Step 3 · Fetch comments

./scripts/bbc fetch BV1NjA7zjEAU \
  --cookie-file ~/Downloads/bilibili_cookies.txt

URLs are accepted too:

./scripts/bbc fetch "https://www.bilibili.com/video/BV1NjA7zjEAU/"

Output lives in ./bilibili-comments/BV1NjA7zjEAU/:

bilibili-comments/BV1NjA7zjEAU/
├── comments.jsonl      # flat JSONL — one comment per line
├── summary.json        # video meta + aggregated stats + top-N
├── raw/                # archived API responses
└── .bbc-state.json     # resume / incremental state

Environment variables

export BBC_COOKIE_FILE="$HOME/Downloads/bilibili_cookies.txt"
./scripts/bbc fetch BV1NjA7zjEAU

Or pass SESSDATA directly:

export BBC_SESSDATA="<value from DevTools>"
./scripts/bbc fetch BV1NjA7zjEAU

Analysis workflow with Claude Code

After fetch completes, ask Claude something like:

Read ./bilibili-comments/BV1NjA7zjEAU/summary.json first — give me the
overall picture: video stats, comment distribution, top 20 liked. Then
I'll direct what to analyze next.

Claude follows this path:

  1. Read summary.json first (a few KB) — video title, stats, time
    distribution, IP distribution, top-N comments.
  2. Sample comments.jsonl on demand — each line is a flat JSON record;
    Grep for keywords, head/tail for chronology, sort by like for
    hot-comment analysis.
  3. Typical analyses:
    • Sentiment: positive / negative / neutral ratio
    • Keyword frequency (excluding stopwords)
    • UP interaction audit: filter is_up_reply=true, see which threads you
      replied to vs. missed
    • Geographic breakdown from ip_location
    • Feedback evolution: bucket ctime_iso by week/month
    • Super-fan detection: group by mid, rank by comment count
    • Negative-review triage: high like + negative keywords
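
Because comments.jsonl holds one flat JSON record per line, analyses like these reduce to a few lines of standard-library Python. A minimal sketch of keyword frequency and super-fan detection, using fabricated sample records for illustration (a real run would read the file line by line):

```python
from collections import Counter

# Fabricated sample records mirroring the comments.jsonl schema
sample = [
    {"mid": 1, "uname": "fan_a", "message": "great video great editing", "like": 3},
    {"mid": 1, "uname": "fan_a", "message": "great soundtrack", "like": 1},
    {"mid": 2, "uname": "fan_b", "message": "nice pacing", "like": 5},
]

# Keyword frequency (naive whitespace tokenizer; real Chinese comments
# would need proper CJK word segmentation)
words = Counter(w for rec in sample for w in rec["message"].split())

# Super-fan detection: group by mid, rank by comment count
by_user = Counter(rec["mid"] for rec in sample)

print(words.most_common(1))   # [('great', 3)]
print(by_user.most_common(1)) # [(1, 2)]
```

The same pattern extends to the other analyses: swap the Counter key for `ip_location`, a week bucket of `ctime_iso`, or a sentiment label.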

Commands

bbc fetch <BV|URL>

--max N                 Cap top-level comments (default: all)
--since <date>          Only fetch comments newer than this (ISO, e.g. 2026-04-01)
--output <dir>          Output directory (default ./bilibili-comments/<BV>/)
--cookie-file <path>    Netscape cookie file
--browser <name>        auto / firefox / chrome / edge / safari
--format json|table     stdout format
--dry-run               Preview request plan, no network calls
--force                 Ignore resume state, refetch everything

bbc fetch-user <UID> (coming soon)

Batch fetch across a UP主's entire video catalog.

bbc summarize <dir>

Rebuild summary.json from an existing comments.jsonl.

bbc cookie-check

Validate the cookie and print the logged-in user.

bbc schema [command]

Return JSON schema for a command (param types, exit codes, error codes).

Exit codes

Code Meaning
0 Success
1 Runtime / API error
2 Auth error (cookie invalid / missing)
3 Validation error (bad parameter)
4 Network error (timeout / retries exhausted)
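
A calling agent or wrapper script can branch on these codes. A minimal sketch (the `./scripts/bbc` path follows the install section, and `run_fetch` is a hypothetical helper, not part of the tool):

```python
import subprocess

# Exit-code meanings from the table above
EXIT_CODES = {
    0: "success",
    1: "runtime/API error",
    2: "auth error",
    3: "validation error",
    4: "network error",
}

def classify(returncode: int) -> str:
    """Map a bbc exit code to a human-readable category."""
    return EXIT_CODES.get(returncode, "unknown")

def run_fetch(bv: str, cookie_file: str) -> str:
    # Hypothetical wrapper; adjust the script path to your install location.
    proc = subprocess.run(
        ["./scripts/bbc", "fetch", bv, "--cookie-file", cookie_file],
        capture_output=True, text=True,
    )
    return classify(proc.returncode)

print(classify(2))  # auth error
```

An agent seeing "auth error" should prompt the human to re-export the cookie rather than retry blindly.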

Output schemas

comments.jsonl record

{
  "rpid": 296636680849,
  "bvid": "BV1NjA7zjEAU",
  "parent": 0,
  "root": 0,
  "mid": 71171081,
  "uname": "user nickname",
  "user_level": 4,
  "vip": false,
  "ctime": 1776521119,
  "ctime_iso": "2026-04-18T06:25:19+00:00",
  "message": "...",
  "like": 1,
  "rcount": 0,
  "ip_location": "河北",
  "is_up_reply": false,
  "top_type": 0,
  "mentioned_users": [],
  "jump_urls": []
}
  • parent=0 → top-level; otherwise rpid of the parent comment
  • top_type: 0=normal, 1=UP pinned, 2=editor pinned
  • is_up_reply: true if the comment was authored by the video owner
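
These three fields are enough to slice the file. A small sketch over fabricated JSONL lines:

```python
import json

# Fabricated lines in the comments.jsonl record shape
lines = [
    '{"rpid": 1, "parent": 0, "top_type": 1, "is_up_reply": false, "message": "pinned"}',
    '{"rpid": 2, "parent": 0, "top_type": 0, "is_up_reply": true, "message": "from the UP"}',
    '{"rpid": 3, "parent": 2, "top_type": 0, "is_up_reply": false, "message": "nested"}',
]
records = [json.loads(line) for line in lines]

pinned = [r for r in records if r["top_type"] in (1, 2)]   # UP or editor pinned
up_replies = [r for r in records if r["is_up_reply"]]      # authored by video owner
top_level = [r for r in records if r["parent"] == 0]       # parent=0 means top-level

print(len(pinned), len(up_replies), len(top_level))  # 1 1 2
```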

summary.json fields

  • video: title, description, stats, tags, cover URL, owner
  • counts: total, top-level, nested, pinned, unique users, UP replies,
    completeness ratio
  • time_distribution: earliest/latest timestamps, daily histogram
  • top_liked: top-N comments by like count
  • top_replied: top-N top-level comments by reply count
  • ip_distribution: histogram of IP provinces

See references/agent-contract.md for the full schema.
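
The bbc summarize command rebuilds this file from comments.jsonl; conceptually that is an aggregation like the sketch below (the field names are assumptions based on the list above, not the tool's exact schema):

```python
# Fabricated records in the comments.jsonl shape
records = [
    {"rpid": 1, "parent": 0, "mid": 10, "like": 5, "message": "a"},
    {"rpid": 2, "parent": 0, "mid": 11, "like": 2, "message": "b"},
    {"rpid": 3, "parent": 1, "mid": 10, "like": 9, "message": "c"},
]

summary = {
    "counts": {
        "total": len(records),
        "top_level": sum(1 for r in records if r["parent"] == 0),
        "nested": sum(1 for r in records if r["parent"] != 0),
        "unique_users": len({r["mid"] for r in records}),
    },
    # Top-N comments ranked by like count
    "top_liked": sorted(records, key=lambda r: r["like"], reverse=True)[:2],
}

print(summary["counts"])  # {'total': 3, 'top_level': 2, 'nested': 1, 'unique_users': 2}
```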


Limits & caveats

  • Read-only — never posts, edits, or deletes. Safety tier: open.
  • Rate — 1s between top-level requests, 0.5s for nested. ~5000 comments
    takes 10-15 minutes.
  • Anti-bot — HTTP 412 triggers exponential backoff (3 retries).
  • Completeness — the completeness field in summary.json compares
    fetched vs. declared counts; values below 1.0 indicate deleted comments
    or API inconsistency.
  • Anonymous not supported — UP主 analysis requires a valid cookie.



Contributing

Suggestions, bug reports, and pull requests are all welcome. If you have
ideas — new analysis workflows, better anti-bot defaults, additional
platform support, documentation fixes — feel free to
open an issue or
submit a PR directly.

This skill is community-friendly: every contribution, no matter how small,
helps make it better for everyone.


License

MIT


Support

If this skill helps you, consider supporting the author:

WeChat Pay
Alipay
Buy Me a Coffee

Author

Agents365-ai — building open-source skills for AI coding agents.
