DSPy-Programming-not-prompting-LMs-skills

Security Audit
Warning
Health: Warning
  • No license — Repository has no license file
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 5 GitHub stars
Code: Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions: Passed
  • Permissions — No dangerous permissions requested
Purpose
This tool provides a collection of AI skills and commands for coding agents like Claude Code. It helps developers scaffold and build reliable AI features, such as RAG pipelines, text classifiers, and parsers, using the DSPy framework instead of manually written prompts.

Security Assessment
Overall Risk: Low. The automated code scan reviewed 12 files and found no dangerous patterns or hardcoded secrets. The tool does not request any dangerous system permissions. Since it is designed to generate code and project structures for AI features, it will naturally help you write scripts that make network requests or access databases, but the tool itself acts as an assistant rather than executing arbitrary shell commands or accessing sensitive data directly.

Quality Assessment
The project is actively maintained, with its most recent push made today. The README is thorough, clearly outlining a wide variety of use cases and setup instructions so developers can get started immediately. However, there are two notable concerns around maturity and community trust. First, the repository lacks a license file, so the legal terms for using, modifying, or distributing the code are undefined; by default, all rights are reserved. Second, the project has very low visibility, with only 5 GitHub stars, indicating a minimal user base and limited peer review from the open-source community.

Verdict
Use with caution: The codebase appears safe and clean, but you should be aware of its limited community testing and the lack of an explicit open-source license.
SUMMARY

AI skills for Claude Code, Cursor, and other coding agents. Build reliable AI features with DSPy — classification, RAG, parsing, agents, and more. Just type /ai-do.

README.md

AI Skills for Claude Code

Build reliable AI features. Powered by DSPy — a framework that lets you program language models with composable modules instead of hand-writing prompts.

Quick start

The only skill you need from the repo is /ai-do. Describe what you want to build, and it tells you which skill to use next.

npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills --skill ai-do

Then in Claude Code:

/ai-do I want to build a support ticket classifier

It picks the right skill, generates a ready-to-run prompt, and tells you what to install.

What problem are you solving?

Your problem Skill What it does
"I want to build something with AI but not sure where to start" /ai-do Describe your problem, get routed to the right skill with a ready-to-use prompt
"I'm starting a new AI feature" / "DSPy quickstart" /ai-kickoff Scaffold a complete AI project with the right structure
"I need a multi-phase plan for my AI feature" / "what order should I build this in" /ai-planning Create a phased roadmap before you start building
"Which DSPy module should I use?" / "Predict vs ChainOfThought" /ai-choosing-architecture Pick the right DSPy pattern and architecture for your task
"I need to auto-sort/tag/categorize content" / "classification results are inconsistent" /ai-sorting Build AI that sorts tickets, tags emails, detects sentiment
"I need to search docs and answer questions" / "retrieval returns irrelevant results" / "RAG pipeline tutorial" /ai-searching-docs Build AI-powered knowledge base, help center, or doc Q&A
"I need AI to answer questions about our database" / "chat with your Postgres" /ai-querying-databases Text-to-SQL: plain English questions over Postgres, MySQL, Snowflake
"I need to condense long content into summaries" /ai-summarizing Summarize meetings, articles, threads — with length control
"I need to pull structured data from messy text" / "the emails are messy and lack structure" /ai-parsing-data Parse invoices, extract entities, convert text to JSON
"I need AI to take actions and call APIs" / "LLM function calling" /ai-taking-actions Build AI that calls APIs, uses tools, and completes tasks
"I need AI to write articles, reports, or copy" /ai-writing-content Generate blog posts, product descriptions, newsletters
"My AI fails on hard problems that need planning" / "LLM can't do multi-step logic" /ai-reasoning Add multi-step reasoning, Self-Discovery, chain-of-thought
"My task needs multiple AI steps" / "LangChain LCEL alternative" /ai-building-pipelines Chain classify, retrieve, generate, verify into one pipeline
"I need to verify AI output before users see it" / "LLMs invent data points" /ai-checking-outputs Add guardrails, fact-checking, safety filters, and quality gates
"My AI makes stuff up / hallucinates" / "LLM makes up facts" /ai-stopping-hallucinations Ground AI in facts with citations, verification, and source checking
"My AI doesn't follow our rules" / "LLM JSON output is unreliable" /ai-following-rules Enforce content policies, format rules, and business constraints
"Review my DSPy code" / "is my AI code correct" / "best practices check" /ai-auditing-code Audit DSPy code for correctness, anti-patterns, and best practices
"My AI gives wrong answers" / "I spent hours tweaking prompts" /ai-improving-accuracy Measure quality, then systematically improve it
"My AI gives different answers every time" / "same prompt, different results every run" /ai-making-consistent Lock down outputs so they're predictable and reliable
"My AI is too expensive" / "LLM API costs too high" / "GPT-4 costs too much" /ai-cutting-costs Reduce API costs with smart routing, caching, fine-tuning
"Let's fine-tune on our data" / "prompt optimization hit a ceiling" /ai-fine-tuning Train models on your data for max quality or cost savings
"Can we switch to a different model?" / "prompt broke after model update" /ai-switching-models Switch providers, compare models, re-optimize automatically
"We don't have enough training data" / "no labeled data, need to bootstrap" /ai-generating-data Generate synthetic examples, fill data gaps, bootstrap from scratch
"How do I put my AI behind an API?" / "deploy LLM as API" / "productionize my AI" /ai-serving-apis Wrap your AI in FastAPI endpoints for production serving
"Is our AI safe to launch?" / "prevent prompt injection" /ai-testing-safety Automatically find vulnerabilities with adversarial testing
"We need to moderate user content" /ai-moderating-content Build AI content moderation with severity levels and routing
"Is our AI still working in production?" / "silent quality drops, prompt drift" /ai-monitoring Monitor quality, safety, and cost — catch degradation early
"Why did my AI give that wrong answer?" /ai-tracing-requests Trace individual requests — see every LM call, retrieval, and step
"Which of our optimization experiments was best?" /ai-tracking-experiments Log, compare, and promote optimization runs
"I need AI to score, grade, or evaluate things" / "LLM as a judge" /ai-scoring Score essays, audit support quality, rate code reviews against rubrics
"My AI works on simple inputs but fails on complex ones" /ai-decomposing-tasks Break unreliable single-step tasks into reliable subtasks
"I need a conversational AI assistant" / "how do I build a chatbot" / "Intercom bot alternative" /ai-building-chatbots Build chatbots with memory, state, and doc-grounded responses
"I need multiple AI agents working together" / "CrewAI alternative" /ai-coordinating-agents Supervisor agents, specialist handoff, parallel research teams
"I need to translate content to other languages" / "localize our app" / "i18n with AI" /ai-translating-content Translate with glossary enforcement, batch i18n, brand voice
"I need product recommendations" / "you might also like" / "personalize the feed" /ai-recommending Retrieval + LM re-ranking for personalized recommendations
"I need to strip PII before sending to an LLM" / "GDPR compliance" / "anonymize data" /ai-redacting-data Detect and replace PII with regex + LM pipeline
"I need to deduplicate contacts" / "entity resolution" / "merge records" /ai-matching-records Block, score pairs, merge duplicates across datasets
"I need to normalize messy data" / "standardize company names" / "fix inconsistent formats" /ai-cleaning-data AI-powered data normalization with rule inference
"I need to detect fraud or anomalies" / "flag suspicious transactions" / "abuse detection" /ai-detecting-anomalies Score events against baselines with severity and explanations
"I need smart notification messages" / "weekly digest" / "incident alerts from logs" /ai-generating-notifications Event-driven notifications with channel constraints and digests
"I need to analyze images" / "extract text from screenshots" / "generate alt text" /ai-understanding-images Vision model pipelines with dspy.Image for structured extraction
"I need to rewrite text in a different tone" / "simplify legal language" / "adapt for different audience" /ai-rewriting-text Tone, reading level, and audience adaptation with fidelity checks
"My AI is broken/erroring" / "Could not parse LLM output" /ai-fixing-errors Diagnose and fix crashes, wrong outputs, and weird behavior
"DSPy can do X but there's no skill for it" /ai-request-skill Build a missing skill and submit a PR, or file a GitHub issue requesting it

Using a specific tool with DSPy?

Tool Skill What it covers
VizPy (prompt optimizer) /dspy-vizpy Drop-in ContraPrompt/PromptGrad optimizers as alternative to GEPA/MIPROv2
Langtrace /dspy-langtrace Auto-instrument DSPy with one line, cloud + self-hosted tracing
Arize Phoenix /dspy-phoenix Open-source trace viewer with built-in evals, local UI at localhost:6006
W&B Weave /dspy-weave Cloud experiment tracking and team dashboards via @weave.op() decorator
MLflow /dspy-mlflow Auto-tracing, experiment tracking, and model registry for DSPy
LangWatch /dspy-langwatch Auto-tracing + real-time optimizer progress dashboard
Langfuse /dspy-langfuse Tracing + scoring + annotation queues + experiment tracking
Ragas /dspy-ragas Decomposed RAG evaluation: faithfulness, context precision/recall
Qdrant /dspy-qdrant Official vector DB integration + custom retriever pattern for any DB
Ollama /dspy-ollama Run DSPy with local models, no API key needed
vLLM /dspy-vllm High-throughput production serving for self-hosted models

Know which DSPy concept you need?

If you already know DSPy and think in its vocabulary, use these API-first skills instead:

DSPy concept Skill What it covers
Signature, InputField, OutputField /dspy-signatures Inline and class-based signatures, typed fields, Pydantic models
dspy.LM, dspy.configure /dspy-lm Provider strings, temperature/max_tokens, per-module LM assignment
dspy.Assert, dspy.Suggest /dspy-assertions REMOVED in DSPy 3.x — legacy docs only. Use /dspy-refine or /dspy-best-of-n instead
dspy.Module, forward() /dspy-modules Custom modules, composing sub-modules, save/load state
dspy.Example, Prediction /dspy-data with_inputs(), train/dev splits, loading from CSV/JSON/HuggingFace
dspy.Evaluate, metrics /dspy-evaluate SemanticF1, exact match, LM-as-judge, composite metrics
dspy.Predict /dspy-predict Direct LM calls, simplest inference module
dspy.ChainOfThought /dspy-chain-of-thought Step-by-step reasoning, reasoning field
dspy.ProgramOfThought /dspy-program-of-thought Code generation + execution for math/computation
dspy.ReAct /dspy-react Tool-using agents, Reasoning-Action-Observation loop
dspy.CodeAct /dspy-codeact Agents that write and execute code to act
dspy.MultiChainComparison /dspy-multi-chain-comparison Multiple reasoning chains, pick the best
dspy.BestOfN /dspy-best-of-n Rejection sampling with a reward function
dspy.Parallel /dspy-parallel Concurrent LM calls, batch processing
dspy.Refine /dspy-refine Iterative self-improvement with feedback
dspy.RLM /dspy-rlm Reinforcement-learning-style refinement
dspy.BootstrapFewShot /dspy-bootstrap-few-shot Auto-generate few-shot demos, first optimizer to try
dspy.BootstrapFewShotWithRandomSearch /dspy-bootstrap-rs Random search over candidate demo sets
dspy.MIPROv2 /dspy-miprov2 Best prompt optimizer, instructions + demos jointly
dspy.GEPA /dspy-gepa Instruction generation and selection
dspy.BetterTogether /dspy-better-together Combined prompt + weight tuning
dspy.BootstrapFinetune /dspy-bootstrap-finetune Fine-tune weights from bootstrapped data
dspy.COPRO /dspy-copro Instruction candidates with breadth search
dspy.Ensemble /dspy-ensemble Combine multiple optimized programs
dspy.InferRules /dspy-infer-rules Extract decision logic from examples
dspy.KNN, dspy.KNNFewShot /dspy-knn-few-shot Embedding-based demo retrieval
dspy.LabeledFewShot /dspy-labeled-few-shot Hand-picked demonstrations
dspy.SIMBA /dspy-simba Small-step incremental optimization
ChatAdapter, JSONAdapter, TwoStepAdapter /dspy-adapters Prompt formatting, structured output (see also /dspy-two-step-adapter)
dspy.ChatAdapter (deep dive) /dspy-chatadapter Field delimiters, parse/format internals, JSON fallback, fine-tuning data
dspy.Tool, PythonInterpreter /dspy-tools Wrapping functions as tools, code execution
dspy.Retrieve, ColBERTv2, Embedder /dspy-retrieval Search, RAG pipelines, embeddings
dspy.Image, dspy.Audio, dspy.Code, dspy.History /dspy-primitives Multimodal inputs, conversation history
dspy.streamify, StreamListener, StreamResponse /dspy-streaming Real-time token streaming to frontends, SSE, WebSocket
dspy.Tool.from_mcp_tool, MCP servers /dspy-mcp Connect agents to MCP tool servers
dspy.experimental.Citations, Document /dspy-citations Structured source attribution for RAG
dspy.TwoStepAdapter /dspy-two-step-adapter Reasoning models (o1, o3, DeepSeek-R1) structured output
aforward(), acall(), async patterns /dspy-async Async execution, FastAPI, concurrent calls
inspect_history, save/load, configure_cache /dspy-utils Caching, debugging, persistence
VizPy (ContraPromptOptimizer, PromptGradOptimizer) /dspy-vizpy Commercial drop-in prompt optimizer, alternative to GEPA
Langtrace (langtrace.init) /dspy-langtrace Auto-instrument DSPy, cloud + self-hosted LLM observability
Arize Phoenix (DSPyInstrumentor) /dspy-phoenix Open-source trace viewer with evals, local UI
W&B Weave (@weave.op()) /dspy-weave Cloud experiment tracking and team dashboards
MLflow (mlflow.dspy.autolog()) /dspy-mlflow Auto-tracing, experiment tracking, model registry
LangWatch (langwatch.dspy.init) /dspy-langwatch Auto-tracing and real-time optimizer progress
Langfuse (DSPyInstrumentor, @observe) /dspy-langfuse Tracing + scoring + annotation queues + experiments
Ragas (ragas.evaluate) /dspy-ragas Decomposed RAG evaluation with LLM-as-judge metrics
Qdrant (QdrantRM) /dspy-qdrant Vector DB retriever with hybrid search
Ollama (ollama_chat/) /dspy-ollama Local model serving for development
vLLM (openai/ + local server) /dspy-vllm Production self-hosted model serving

Install

Option 1: npx skills (recommended — works with any AI coding agent)

Install all 94 skills in one command. Works with Claude Code, Cursor, Codex, Cline, Windsurf, and 35+ other agents.

npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills

The CLI will prompt you to pick which skills and which agents to install. Or install everything non-interactively:

npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills --all -y

To install globally (available in all your projects):

npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills -g

Option 2: Claude Code plugin marketplace

# In Claude Code, run:
/plugin marketplace add lebsral/DSPy-Programming-not-prompting-LMs-skills

Then install a skill group:

/plugin install dspy-build-skills@dspy-skills     # Building AI features (17 skills)
/plugin install dspy-quality-skills@dspy-skills    # Quality and reliability (8 skills)
/plugin install dspy-ops-skills@dspy-skills        # Production operations (7 skills)
/plugin install dspy-api-skills@dspy-skills        # DSPy API-first skills (37 skills)

Option 3: Manual (git clone)

git clone https://github.com/lebsral/DSPy-Programming-not-prompting-LMs-skills.git

Copy skills to your agent's skill directory:

# Claude Code
cp -r DSPy-Programming-not-prompting-LMs-skills/skills/* ~/.claude/skills/

# Cursor
cp -r DSPy-Programming-not-prompting-LMs-skills/skills/* ~/.cursor/skills/

Or symlink to stay in sync with updates:

ln -s "$(pwd)/DSPy-Programming-not-prompting-LMs-skills/skills/"* ~/.claude/skills/

Option 4: Slash command (just /ai-do, no dependencies)

If you only want the router and prefer Claude Code's built-in slash command system:

curl -fsSL https://raw.githubusercontent.com/lebsral/DSPy-Programming-not-prompting-LMs-skills/main/commands/ai-do.md \
  -o ~/.claude/commands/ai-do.md

This gives you /ai-do globally in Claude Code. It routes you to the right skill and tells you how to install it. No npx, no plugins — just one file.

Managing skills

npx skills list          # See what you have installed
npx skills check         # Check for updates
npx skills update        # Update all installed skills
npx skills remove        # Uninstall skills

Keeping skills up to date

These skills are actively improved. To get the latest versions:

npx skills check          # See what's changed
npx skills update         # Pull latest versions

For manual/symlink installs, run git pull in your cloned repo.

Migrating from v1.12.1 or earlier? A YAML formatting change in v1.12.2 means npx skills update may fail on some skills. Reinstall to fix:

npx skills remove dspy-skills
npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills -g -s '*'

Adding new skills after initial install

npx skills update only updates skills you already have. When new skills are added to this repo (like the dspy- API-first skills), re-run the add command to pick them up:

# Interactive — choose which new skills to add
npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills

# Non-interactive — add all new skills automatically
npx skills add lebsral/DSPy-Programming-not-prompting-LMs-skills --all -y

This won't duplicate skills you already have — it only adds the ones that are missing.

Use a skill

In Claude Code (or any agent that supports the Agent Skills standard):

  • Invoke directly: /ai-sorting or /ai-kickoff my-project
  • Ask naturally: "Help me sort support tickets into categories" — the agent picks the right skill

How It Works

Each skill is a directory under skills/ containing:

  • SKILL.md — Main instructions Claude follows (YAML frontmatter + markdown)
  • examples.md — Worked examples (loaded on demand)
  • reference.md — Detailed reference material (loaded on demand)
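As an illustration, a SKILL.md might begin with frontmatter along these lines (the values here are invented for the example, based on the /ai-sorting entry above):

```yaml
---
name: ai-sorting
description: Build AI that sorts, tags, or categorizes content. Use when the user wants to classify tickets, tag emails, or detect sentiment.
---
```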

Under the hood, skills use DSPy — a framework for building AI features with composable modules that compile into optimized prompts. You don't need to know DSPy to use these skills; they guide you through everything.

Skills follow the Claude Code skills format and the Agent Skills open standard.

Contributing

Adding a new skill

  1. Create skills/ai-<problem>/SKILL.md — name it after the problem, not the DSPy concept
  2. Add YAML frontmatter with name and description (include phrases users would naturally say)
  3. Write step-by-step instructions in the markdown body
  4. Add examples.md and/or reference.md for supporting content
  5. Update the problem catalog table in this README
  6. Test with /ai-<problem> in Claude Code
  7. Bump the patch version in .claude-plugin/marketplace.json

See docs/skills-spec.md for the full skill format specification.
