claude-skills
Health: Pass
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 16 GitHub stars
Code: Fail
- eval() — Dynamic code execution via eval() in deslop/scripts/scan_patterns.sh
- exec() — Shell command execution in deslop/scripts/scan_patterns.sh
- process.env — Environment variable access in deslop/scripts/scan_patterns.sh
Permissions: Pass
- Permissions — No dangerous permissions requested
This project provides a collection of prompt-based "skills" designed to extend the capabilities of Claude Code. It focuses on helping developers audit, refine, and safely deploy AI-generated codebases.
Security Assessment
Overall Risk: Medium. The tool operates primarily as a set of instructional Markdown files, meaning it contains no traditional application code. However, the automated rule-based scan flagged dynamic execution and shell command warnings in a script directory (`deslop/scripts/scan_patterns.sh`). While the installation process consists of standard file copying, the skills themselves are designed to instruct the AI to perform deep system scans, read local files, and execute tasks. The `people-sourcer` skill specifically directs the AI to scrape external websites, which poses inherent network and behavioral risks depending on the user's environment.
Quality Assessment
The project is in active development, with its last push occurring today. It is licensed under the permissive and standard MIT license, which is excellent for open-source collaboration. Current community trust is quite low, indicated by only 16 GitHub stars. The straightforward README and simple installation method demonstrate good developer experience and basic structural health.
Verdict
Use with caution: while the repository itself is just a collection of safe text files that are easy to review, the tasks it instructs your AI to perform (like shell execution and external web scraping) warrant careful human oversight before deployment.
Battle-tested Claude Code skills for hardening vibe-coded projects. Audit, clean, and ship AI-generated codebases with confidence.
Claude Skills
Personally curated, self-tested skills for Claude Code — an experimental attempt to make vibe coding actually useful.
Every skill here is a top-level directory containing a SKILL.md. That's the whole convention. Drop a folder in, follow that shape, and the installer below picks it up automatically — no list to update, no manifest to edit.
Install everything (one command)
Run from inside this repo. Both commands install every top-level directory that contains a SKILL.md into ~/.claude/skills/. Re-running upgrades in place. Add a new skill tomorrow that follows the same structure → re-run the same command, it just works.
Linux / macOS (bash, zsh):
```bash
mkdir -p "$HOME/.claude/skills" && for d in */; do [ -f "${d}SKILL.md" ] || continue; rm -rf "$HOME/.claude/skills/${d%/}"; cp -R "$d" "$HOME/.claude/skills/"; done
```
Windows (PowerShell):
```powershell
$dst = "$HOME\.claude\skills"; New-Item -ItemType Directory -Force -Path $dst | Out-Null; Get-ChildItem -Directory | Where-Object { Test-Path (Join-Path $_.FullName 'SKILL.md') } | ForEach-Object { $t = Join-Path $dst $_.Name; if (Test-Path $t) { Remove-Item -Recurse -Force $t }; Copy-Item -Recurse -Force -Path $_.FullName -Destination $dst }
```
Want one skill instead of all of them? Just `cp -R <skill-name>/ ~/.claude/skills/`.
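If the bash one-liner is hard to read, here is the same installer expanded with comments (identical behavior, just spread across lines):

```shell
#!/usr/bin/env bash
# Expanded form of the one-line installer above.
set -euo pipefail

dest="$HOME/.claude/skills"
mkdir -p "$dest"

# Run from inside the repo: every top-level directory counts.
for d in */; do
    # Only directories that contain a SKILL.md are skills.
    [ -f "${d}SKILL.md" ] || continue

    # Remove any previously installed copy, then install fresh.
    # This is what makes re-running the command an in-place upgrade.
    rm -rf "$dest/${d%/}"
    cp -R "$d" "$dest/"
done
```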
What each skill does
| Skill | One-liner |
|---|---|
| deslop | Audit and harden AI-generated codebases. Two-phase workflow: structured multi-pass AUDIT.md, then safety-tiered fixes. Never touches business logic. |
| autonomous-research | Reads files in your active directory, runs exhaustive multi-round literature research, self-critiques in loops, and produces a publication-quality PDF. Built for "find the gap, write the thing." |
| people-sourcer | Builds real prospect / candidate / outreach lists. Iterative scraping across LinkedIn, Reddit, X, Instagram, TikTok, YouTube, GitHub. Per-person commentary, not generic blurbs. Outputs a multi-sheet xlsx. |
| pro-graphic-designer | End-to-end graphic design — posters, carousels, banners, thumbnails, decks, ad creatives. Audience research → reference mining (Behance / Pinterest / Dribbble) → copy → output as Canva / HTML / SVG / PDF. |
| worldbuilder-writing | Treats writing as applied psychology, not self-expression. The reusable engine for any blog post, email, pitch, script, landing page, or sales copy. |
| academic-paper | Format text as a publication-ready PDF using reportlab — title block, sectioning, tables, figures, references. White papers, preprints, lit reviews. |
| skill-creator | Meta-skill: create new skills, edit existing ones, run evals, tune description fields for better triggering. |
| consolidate-memory | Reflective pass over your CLAUDE.md / memory directory — merges duplicates, prunes stale facts, fixes the index. |
| docx | Read / edit / create Word documents. Tables of contents, headings, page numbers, tracked changes, comments, image insertion, find-replace. |
| pdf | Read text + tables, merge / split, rotate, watermark, fill forms, encrypt / decrypt, OCR scanned pages, extract images. |
| pptx | Read / edit / create PowerPoint decks. Templates, layouts, speaker notes, comments, combine / split. |
| xlsx | Read / edit / create spreadsheets. Formulas, formatting, charts, cleanup of malformed tabular data. |
| schedule | Create a scheduled task that runs on demand or on an interval. |
| setup-cowork | Guided Cowork onboarding — install role-matched plugins, connect tools, try a skill. |
How they connect
Some skills explicitly read another skill's SKILL.md mid-run. If you cherry-pick rather than installing the whole set, install the dependencies too — those skills degrade silently without them.
```
autonomous-research ──▶ academic-paper
people-sourcer ─┬─▶ worldbuilder-writing
                ├─▶ xlsx
                └─▶ pro-graphic-designer ──▶ worldbuilder-writing
```
| Skill | Depends on | Why |
|---|---|---|
| autonomous-research | academic-paper | Final deliverable is a formatted PDF. autonomous-research literally reads academic-paper before writing, then follows its reportlab workflow. |
| people-sourcer | worldbuilder-writing | Phase 0 (audience modeling) and the per-person commentary step both delegate to it. The skill explicitly states "this skill depends on it." |
| people-sourcer | xlsx | Phase 6 output is a multi-sheet xlsx. Read before generating. |
| people-sourcer | pro-graphic-designer | Architectural sibling — same scratchpad-driven, iterative-scraping shape. Cross-referenced for shared patterns. |
| pro-graphic-designer | worldbuilder-writing | Phase 0 (audience model) and Phase 4 (copy) both run through it. |
worldbuilder-writing is the most-depended-on node — install it first if you're picking and choosing. The installer above grabs everything in one shot, so this only matters for cherry-pickers.
Why a BrightData token is required (for three of the skills)
Three skills are not "search the web a couple of times" skills — they are scraping pipelines:
- `autonomous-research` — multi-round structured scraping of search engines, papers, and arbitrary websites for its literature sweep.
- `people-sourcer` — ~40 calls per run across LinkedIn / Reddit / X / Instagram / TikTok / YouTube / GitHub to discover, dedupe, and enrich named individuals.
- `pro-graphic-designer` — Reddit / LinkedIn / Facebook / Instagram / research papers for audience signal, plus Behance / Pinterest / Dribbble for visual references.
All three load BrightData MCP tools at runtime via tool_search — search_engine, scrape_as_markdown, scrape_batch, and the platform-specific web_data_* extractors (web_data_linkedin_person_profile, web_data_reddit_posts, etc.). Those tools authenticate against your BrightData account using an API token.
Without the token:
- The `tool_search` calls return tools that fail at first invocation.
- Plain `WebSearch` + `WebFetch` cannot substitute. Most target platforms (LinkedIn, Instagram, TikTok, paywalled news, Behance) either block direct fetches, return JS-only shells, or rate-limit aggressively. BrightData's residential / unblocker layer is exactly what gets you past that — and the structured `web_data_*` endpoints return clean JSON instead of a brittle DOM scrape.
- The skills' iteration loops (round 1 broad → round 2 deep → enrichment) collapse to round 1, and the output is shallow.
The other eleven skills don't need it. If you only run docx, pdf, pptx, xlsx, worldbuilder-writing, academic-paper, consolidate-memory, schedule, setup-cowork, skill-creator, or deslop, you can skip BrightData entirely.
Setup: add the BrightData MCP server to your Claude config with your API token. The token belongs to you — never paste it into a SKILL.md or commit it to this repo.
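As a sketch of that setup step (check BrightData's current MCP docs for the exact package name and token variable; `@brightdata/mcp` and `API_TOKEN` are assumptions here), the server entry in your Claude MCP config typically looks like:

```json
{
  "mcpServers": {
    "brightdata": {
      "command": "npx",
      "args": ["-y", "@brightdata/mcp"],
      "env": {
        "API_TOKEN": "<your BrightData API token>"
      }
    }
  }
}
```

Keep the real token out of version control; load it from your shell environment or a secrets manager rather than hardcoding it in a committed config file.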
Adding your own skill
- Create a top-level folder: `my-skill/`.
- Inside it, write a `SKILL.md` with frontmatter (`name`, `description`) and the body.
- Add any `scripts/`, `references/`, `assets/`, `examples/` it needs alongside `SKILL.md`.
- Re-run the install one-liner. Done.
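The steps above can be scripted. A minimal scaffold, using the example name `my-skill` and only the frontmatter fields this README names:

```shell
#!/usr/bin/env bash
# Scaffold a new skill in the shape the installer expects.
set -euo pipefail

skill="my-skill"
mkdir -p "$skill/scripts" "$skill/references"

# SKILL.md with the required frontmatter (name, description) and a body.
cat > "$skill/SKILL.md" <<'EOF'
---
name: my-skill
description: One sentence describing when Claude should trigger this skill.
---

# my-skill

Instructions for the skill body go here.
EOF

echo "Created $skill/. Re-run the install one-liner to deploy it."
```

Because the installer keys only on the presence of `SKILL.md`, nothing else needs registering.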
The installer detects skills by the presence of SKILL.md, so anything in this repo without one (LICENSE, README.md, .git/) is left alone.
Contributing
Open PRs welcome — new skills, improvements, fixes.
License
MIT
Contact
Karan Prasad — [email protected]