phero
Health: Passed
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 29 GitHub stars
Code: Passed
- Code scan — Scanned 2 files during light audit, no dangerous patterns found
Permissions: Passed
- Permissions — No dangerous permissions requested
This tool is a Go framework designed for building and orchestrating multi-agent AI systems. It provides composable building blocks to coordinate specialized AI agents, including features like RAG, memory management, and MCP integration.
Security Assessment
Overall risk: Medium. As an orchestration framework, it is inherently designed to connect to external network resources, such as LLM APIs (OpenAI, Anthropic), vector databases, and remote A2A agents. The documentation explicitly mentions tool guardrails, including bash execution blocklists and safe-mode options, which implies the capability to execute local shell commands if configured to do so. The automated code scan found no hardcoded secrets or dangerous patterns in the evaluated files, and the tool requests no inherently dangerous system permissions.
Quality Assessment
The project is actively maintained, with its most recent code push occurring today. It uses the permissive Apache-2.0 license, making it suitable for most development projects. While it offers a comprehensive feature set, it is still an early-stage project with a relatively small community footprint (29 GitHub stars). The codebase benefits from continuous integration checks and a clean Go Report Card.
Verdict
Use with caution; the framework is clean and recently updated, but its early-stage nature and ability to execute commands via agent tools warrant a thorough review of your specific configuration before deploying in sensitive environments.

🐜 Phero
The chemical language of AI agents.
Phero is a modern Go framework for building multi-agent AI systems. Like ants in a colony, agents in Phero cooperate, communicate, and coordinate toward shared goals, each with specialized roles, working together through a clean, composable architecture.
Why Phero?
- 🤝 **Agent orchestration**: Multi-agent workflows with role specialization, coordination, and agent handoffs
- 🧩 **Composable primitives**: Small, focused packages that solve specific problems
- 🔧 **Tool-first design**: Built-in support for function tools, skills, RAG, and MCP
- 🎨 **Developer-friendly**: Clean APIs, opt-in tracing, OpenAI-compatible + Anthropic support
- 🪶 **Lightweight**: No heavy dependencies; just Go and your choice of LLM provider
Features
Core Capabilities
- 🤝 **Agent orchestration**: Multi-agent workflows with role specialization, coordination, and runtime handoffs
- 🔀 **Agent handoffs**: Transfer control between agents at runtime; `Result.HandoffAgent` tells you where to route next
- 🌐 **A2A protocol**: Expose any agent as an HTTP A2A server, or call remote A2A agents as local tools
- 🧩 **LLM abstraction**: Work with OpenAI-compatible endpoints (OpenAI, Ollama, etc.) and Anthropic
- 🛠️ **Function tools**: Expose Go functions as callable tools with automatic JSON Schema generation
- 📚 **RAG (Retrieval-Augmented Generation)**: Built-in vector storage and semantic search
- 🧠 **Skills system**: Define reusable agent capabilities in `SKILL.md` files
- 🔌 **MCP support**: Integrate Model Context Protocol servers as agent tools
- 🧾 **Memory management**: Conversational context storage for agents
- 🔍 **Tracing**: Typed lifecycle events with a colorized text tracer (`trace/text`) and an NDJSON file tracer (`trace/jsonfile`); per-run summary with token usage and latency breakdowns
- 🛡️ **Tool guardrails**: Bash tool blocklist, allowlist, timeout, and safe-mode options
- ✂️ **Text splitting**: Recursive and Markdown-aware chunkers under `textsplitter/recursive` and `textsplitter/markdown`
- 🧬 **Embeddings**: Semantic search capabilities via OpenAI embeddings
- 🗄️ **Vector stores**: Qdrant, PostgreSQL/pgvector, and Weaviate backends
Requirements
- Go 1.25.5 or later
- An LLM provider (OpenAI / Ollama / OpenAI-compatible endpoint, or Anthropic)
Quick Start
Start with the Simple Agent example to learn the basics in ~100 lines of code.
Then try:
- **Conversational Agent**: a multi-turn REPL chatbot with short-term memory
- **Long-Term Memory**: semantic (RAG) memory backed by Qdrant
Then explore the examples/ directory for more advanced patterns:
- Multi-agent workflows
- RAG chatbots
- Skills integration
- MCP server connections
Some examples require extra services (e.g. Qdrant for vector search).
Architecture
Phero is organized into focused packages, each solving a specific problem:
🤖 Agent Layer
| Package | Description |
|---|---|
| `agent` | Core orchestration for LLM-based agents with tool execution, chat loops, and runtime handoffs |
| `memory` | Conversational context management for multi-turn interactions (in-process, file-backed, RAG-backed, or PostgreSQL-backed) |
💬 LLM Layer
| Package | Description |
|---|---|
| `llm` | Clean LLM interface with function tool support and JSON Schema utilities |
| `llm/openai` | OpenAI-compatible client (works with OpenAI, Ollama, and compatible endpoints) |
| `llm/anthropic` | Anthropic API client |
🧠 Knowledge Layer
| Package | Description |
|---|---|
| `embedding` | Embedding interface for semantic operations |
| `embedding/openai` | OpenAI embeddings implementation |
| `vectorstore` | Vector storage interface for similarity search |
| `vectorstore/qdrant` | Qdrant vector database integration |
| `vectorstore/psql` | PostgreSQL + pgvector integration |
| `vectorstore/weaviate` | Weaviate vector database integration |
| `textsplitter` | Text splitting interface and shared types |
| `textsplitter/recursive` | Recursive character-based chunker |
| `textsplitter/markdown` | Markdown-aware chunker (heading-first separators) |
| `rag` | Complete RAG pipeline combining embeddings and vector stores |
🔧 Tools & Integration
| Package | Description |
|---|---|
| `skill` | Parse SKILL.md files and expose them as agent capabilities |
| `mcp` | Model Context Protocol adapter for external tool integration |
| `a2a` | Agent-to-Agent (A2A) protocol: expose agents as HTTP servers or call remote agents as tools |
| `trace` | Typed observability events; `trace/text` for human-readable colorized output; `trace/jsonfile` for NDJSON file logging; `trace.NewLLM` for raw LLM call wrapping |
| `tool/file` | File viewing and editing helpers (`view`, `create_file` with optional no-overwrite, `str_replace`) |
| `tool/bash` | Bash command execution with blocklist, allowlist, timeout, and safe-mode guardrails |
| `tool/human` | Human-in-the-loop input collection |
Examples
Comprehensive examples are included in the examples/ directory:
| Example | Description |
|---|---|
| Simple Agent | Start here! Minimal example showing one agent with one custom tool; perfect for learning the basics |
| Conversational Agent | REPL-style chatbot with short-term conversational memory and a simple built-in tool |
| Long-Term Memory | REPL-style chatbot with semantic long-term memory (RAG) backed by Qdrant |
| Handoff | One agent hands work off to a specialist agent at runtime using the built-in handoff mechanism |
| A2A Server | Expose a Phero agent as an A2A-compliant HTTP server for cross-process agent calls |
| A2A Client | Connect to a remote A2A agent and use it as a local tool inside an orchestrator |
| Debate Committee | Multi-agent architecture where committee members debate independently and a judge synthesizes the final decision |
| Multi-Agent Workflow | Classic Plan → Execute → Analyze → Critique pattern with specialized agent roles |
| RAG Chatbot | Terminal chatbot with semantic search over local documents using Qdrant |
| Skill | Discover SKILL.md files and expose them as callable agent tools |
| MCP Integration | Run an MCP server as a subprocess and expose its tools to agents |
| Supervisor Blackboard | Supervisor-worker pattern with a shared blackboard for coordination |
| Tracing | Attach a colorized tracer to an agent and inspect LLM requests, tool calls, and memory events in real time |
Design Philosophy
Phero embraces several core principles:
- **Composability over monoliths**: Each package does one thing well
- **Interfaces over implementations**: Swap LLMs, vector stores, or embeddings easily
- **Explicit over implicit**: No hidden magic; clear control flow
- **Tools are first-class**: Function tools are the primary integration point
- **Developer experience matters**: Clean APIs, helpful tracing, good error messages
Contributing
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
License
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Acknowledgments
Built with ❤️ by Simone Vellei.
Inspired by the collaborative intelligence of ant colonies, where independent agents work together toward shared goals, recognizing one another and coordinating through clear protocols.
The ant is not just a mascot. It is the philosophy. 🐜
Links
- Documentation: pkg.go.dev/github.com/henomis/phero
- Issues: github.com/henomis/phero/issues
- Discussions: github.com/henomis/phero/discussions