swarms
Health Warning
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 5 GitHub stars
Code Warning
- network request — Outbound network request in scripts/check-update.sh
Permissions Passed
- Permissions — No dangerous permissions requested
This tool is a collaborative agent framework designed for Claude Code. It structures AI sessions into repeatable workflows by deploying "swarms" of agents that research, execute, and review tasks to produce high-quality outcomes.
Security Assessment
The overall risk is Low. The tool does not request any dangerous system permissions and contains no hardcoded secrets. As a framework that orchestrates AI agents, it inherently executes commands and interacts with your local project files. The automated scan flagged an outbound network request, but this is limited to a benign script that checks for software updates. Users should be aware that the AI agents themselves might execute shell commands or modify code depending on the specific task they are given.
Quality Assessment
The project is actively maintained, with its most recent push occurring today. It is protected under the standard MIT license and features clear, comprehensive documentation. However, community trust and visibility are currently minimal. The repository has only 5 GitHub stars, indicating that very few external developers have reviewed or tested the code. It is essentially an early-stage personal project recently opened to the public.
Verdict
Use with caution; the code itself appears safe and well-documented, but its early stage and lack of widespread community review warrant a careful watch over what the AI agents do during your sessions.
Achieve extraordinary results with Claude Code across a variety of tasks
Swarm
Good Claude Code sessions are repeatable. Swarm is the structure that makes them that way.
I built this across hundreds of sessions, pruning rules, memories, and skills until the quality stopped varying. When model quality shifted, small targeted changes kept it working, even on smaller models. Once the results were consistent enough to rely on, I started sharing with teammates and friends.
Prerequisites
Agent teams must be enabled in Claude Code. Add to ~/.claude/settings.json:
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
The launch command checks this automatically and guides you through enablement if it's missing.
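A minimal sketch of such a check, assuming the flag lives under the `env` key of `~/.claude/settings.json` as shown above (the function name is hypothetical, not part of the plugin):

```python
import json
from pathlib import Path

def agent_teams_enabled(settings_path=Path.home() / ".claude" / "settings.json"):
    """Return True if the agent-teams flag is set to "1" in settings.json."""
    try:
        settings = json.loads(settings_path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        # Missing or malformed settings file means the flag isn't enabled.
        return False
    return settings.get("env", {}).get("CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS") == "1"
```

If this returns False, the launch command walks you through enabling the flag before continuing.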
Installation
claude plugin marketplace add DheerG/swarms
claude plugin install swarm@swarms --scope project
For development or local testing:
claude --plugin-dir /path/to/swarms
Changes take effect in the next session.
Usage
Launch a team
/swarm:launch
Describe your outcomes, choose defaults or configure each step, review the plan, then launch. Pass outcomes inline to skip the opening question:
/swarm:write Help me write a blog article on healing traumas
Mode shortcuts
/swarm:code # Code team
/swarm:write # Writing team
/swarm:general # General team
Modes
| Mode | Lead | Facilitator | Review |
|---|---|---|---|
| Code | Writes code | Principal Engineer | Technical review |
| Writing | Coordinates (can write) | Editorial Director | Editor-sandwich review |
| General | Produces deliverable | Chief of Staff | Facilitator-driven review |
Team shapes
| Shape | Members | Facilitator |
|---|---|---|
| Balanced (default) | Sonnet | Opus |
| Ultra | Opus | Opus |
Other skills
/swarm:audit-context # Check your project for artifacts that may interfere with swarm
/swarm:refine-outcomes # Reframe ideas into outcome statements
/swarm:suggest-members # Recommend team composition
How it works
Every team follows the same phase arc: Research, Converge, Approve, Execute, Review, Refine, Deliver. Phases run in order.
Review is recursive: if the team doesn't reach a 9/10 confidence score, it cycles back through Refine before presenting to you. The facilitator drives this, not the lead. Work reaches you only after the team has agreed it clears the threshold. Most of the quality work happens in the cycles you never see.
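The Review/Refine cycle can be sketched roughly as follows. The 9/10 threshold comes from the workflow above; the scoring and refinement functions are stand-ins for the facilitator's judgment, not actual plugin internals:

```python
CONFIDENCE_THRESHOLD = 9  # out of 10, per the workflow description

def review_cycle(deliverable, score_fn, refine_fn, max_cycles=5):
    """Cycle Review -> Refine until the team's confidence clears the threshold."""
    for _ in range(max_cycles):
        if score_fn(deliverable) >= CONFIDENCE_THRESHOLD:
            return deliverable  # clears the bar; present to the user
        deliverable = refine_fn(deliverable)  # facilitator-driven refinement pass
    return deliverable  # safety valve so the loop can't run forever
```

The key design point is that the facilitator, not the lead, decides when to exit the loop, so the lead cannot ship work the team has not agreed on.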
Watching your team
The most useful signal isn't the final output. It's the discussion. Watching agents reason, push back, and converge tells you whether the team is actually working. AgentChat is a companion tool that surfaces agent conversations in real time. You'll likely find yourself watching the discussion more than Claude Code itself.
Custom workflows
Run /swarm:create-workflow to scaffold a custom mode for your project: domain-specific phases plus a shortcut command. Use /swarm:workflow <mode-name> to launch against any existing custom mode.
Updating
Run /swarm:update to check for and install the latest version.
Or update manually:
claude plugin marketplace update swarms
claude plugin update swarm@swarms --scope project
Ubiquitous Language
| Term | Meaning |
|---|---|
| swarm | The plugin |
| team | A group of agents launched for a task |
| lead | The main session that coordinates work |
| member | A teammate agent (read-only) |
| facilitator | The Socratic facilitator role (Principal Engineer / Editorial Director / Chief of Staff) |
| outcome | What the user wants to achieve |
| mode | The team's domain configuration (Code, Writing, General) |
| shape | Model allocation tier (Balanced, Ultra) |
| phase arc | Research, Converge, Approve, Execute, Review, Refine, Deliver |
| launch | Start a team via /swarm:launch or a mode shortcut |
Contributing
All changes go through branches and pull requests. Automated version bumps by github-actions[bot] are the only exception.
License
MIT