sagent
Health Warning
- License — License: Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 7 GitHub stars
Code Passed
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Passed
- Permissions — No dangerous permissions requested
This project is a strongly-typed Python library and CLI designed for building and managing multi-agent, multi-provider LLM applications. It allows developers to compose, test, and embed agent workflows directly within their Python code rather than relying solely on standalone CLI tools.
Security Assessment
Overall Risk: Low. Based on a light code scan of 12 files, no dangerous patterns, hardcoded secrets, or malicious code were found. The repository does not request any inherently dangerous system permissions. However, as an AI agent framework, the tool inherently processes the prompts you feed it and may execute agent-defined actions (such as reading files, running glob searches, or using grep) based on its configuration. Users should still exercise standard caution regarding which directories and system access they allow the agent to interact with.
Quality Assessment
The project is actively maintained, with its most recent push occurring today. It benefits from clear, comprehensive documentation covering everything from core concepts and tutorials to security and self-hosting. It uses the standard, permissive Apache-2.0 license, making it safe for commercial and personal use. The only notable drawback is its extremely low community visibility; it currently has only 7 GitHub stars, meaning it has not been extensively battle-tested or reviewed by a wide audience.
Verdict
Use with caution—the code appears safe and well-documented, but its extremely low community adoption means it has not yet been broadly validated by the open-source ecosystem.
Strongly typed Python-first library and CLI for multi-provider, multi-agent coding LLMs with Erlang roots and aspirations.
sagent🪄
Typed Python library and CLI for multi-provider, multi-agent LLM applications.
Tutorial · Concepts · Providers · Tools · CLI · Sessions · Security · Architecture · API · Streaming · Compaction · Slack · Self-hosted · Examples
from sagent import tools
from sagent.agent import Agent
from sagent.lib.json import json_freeze
from sagent.providers import Google
agent = Agent(
    model=Google.from_env().model("gemini-3.1-pro-preview"),
    system="You are a scientist.",
    tools=[tools.Read(), tools.Glob(), tools.Grep()],
)
# await requires an async context; see the Quickstart below for a complete asyncio example.
result = await agent.run(json_freeze({"prompt": "analyze the CSV in ./data/"}))
print(result.content)
Why sagent exists
Most serious coding agents are CLIs, editor extensions, hosted
assistants, or non-Python runtimes. Sagent gives you the agent runtime
as typed Python objects you can import, compose, test, and embed. The
CLI is one entry point over the library, not the center of the design.
Use Sagent when you want:
- a Python API first and a CLI second;
- provider swapping without changing the agent loop;
- custom tools as normal Python objects;
- session persistence and compaction;
- child agents and peer messaging for review, delegation, and map-reduce work.
Three pieces make Sagent distinctive:
- Hot-swappable providers. The same agent, tools, session, and compactor can run against Anthropic, OpenAI, Google, Moonshot, DashScope, MiniMax, or an OpenAI-compatible endpoint (see the sketch after this list).
- Multi-agent primitives. AgentSelf, AgentSpawn, and AgentSend let agents inspect themselves, spawn isolated children, and send messages to named peers.
- Typed runtime objects. Agent, Tool, Message, Model, and Provider are Python protocols and dataclasses that can be used directly from application code.
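As a minimal sketch of the provider-swap idea, using only the constructor and provider calls shown in the quickstarts below (the build_agent helper is illustrative, not part of the library):

from sagent import tools
from sagent.agent import Agent
from sagent.providers import Anthropic, Google

def build_agent(model):
    # Same loop, system prompt, and tools; only the backing model changes.
    return Agent(
        model=model,
        system="You are a concise coding assistant.",
        tools=[tools.Read(), tools.Glob(), tools.Grep()],
    )

claude_agent = build_agent(Anthropic.from_env().model("claude-sonnet-4-6"))
gemini_agent = build_agent(Google.from_env().model("gemini-3.1-pro-preview"))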
What sagent does
- Runs agents against Anthropic, OpenAI, Google, Moonshot, DashScope, MiniMax, and OpenAI-compatible endpoints.
- Exposes tools for local files, shell commands, web access, paper search, and agent coordination.
- Keeps the same Agent behind CLI, Slack, parent agents, and application code.
- Represents provider responses, tool calls, tool results, and user messages as typed Message objects.
- Lets agents call AgentSelf, AgentSpawn, and AgentSend as ordinary tools.
Install
pip install sagent
Sagent requires Python 3.12.
Quickstart: CLI
export GOOGLE_API_KEY=...
sagent/bin/cli.py --provider Google --model gemini-3.1-pro-preview
For non-interactive use, pipe a prompt on stdin:
printf 'Say hi in one sentence.' | \
sagent/bin/cli.py --provider Google --model gemini-3.1-pro-preview \
--output-format json
Use --continue to resume the most recent session for this working directory, --session PATH for an explicit session directory, or --no-session-persistence when prompts and auto-memory should not be written to disk. Use --max-budget-usd N to cap API spend for the current run.
Quickstart: Python
import asyncio
from sagent import tools
from sagent.agent import Agent
from sagent.lib.json import json_freeze
from sagent.providers import Anthropic
async def main() -> None:
    agent = Agent(
        model=Anthropic.from_env().model("claude-sonnet-4-6"),
        system="You are a concise coding assistant.",
        tools=[tools.Read(), tools.Grep(), tools.Glob()],
    )
    result = await agent.run(json_freeze({"prompt": "Summarize README.md"}))
    print(result.content)

asyncio.run(main())
Agent.run() accepts a JSON directive with a prompt key and returns a Message.
Provider setup
Sagent ships API-key providers for Anthropic, OpenAI, OpenAISubscription, Google, Moonshot, DashScope, MiniMax, and generic OpenAI-compatible endpoints. Set the key for the provider you plan to use:
export ANTHROPIC_API_KEY=...
export OPENAI_API_KEY=...
export GOOGLE_API_KEY=...
export MOONSHOT_API_KEY=...
export DASHSCOPE_API_KEY=...
export MINIMAX_API_KEY=...
| Provider | Environment variable | Example model |
|---|---|---|
| Anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-6 |
| OpenAI | OPENAI_API_KEY | gpt-5.5 |
| Google | GOOGLE_API_KEY | gemini-3.1-pro-preview |
| Moonshot | MOONSHOT_API_KEY | kimi-k2.6 |
| DashScope | DASHSCOPE_API_KEY | qwen3.6-plus |
| MiniMax | MINIMAX_API_KEY | MiniMax-M2.7 |
| SelfHosted | SAGENT_SELFHOSTED_MODEL | Qwen/Qwen3.6-27B |
See Providers for more detail.
Examples
The examples/ directory contains small, runnable examples:
- offline_custom_tool.py: run an agent/tool/model loop without API keys.
- decorator_tool.py: wrap a function as a tool.
- custom_tool.py: implement the full Tool protocol.
- multi_agent_reviewer.py: spawn an isolated reviewer child.
- openai_compatible_provider.py: connect an OpenAI-compatible endpoint.
Start with the tutorial, then use the examples as copyable patterns.
Concepts
Sagent has five core contracts: Message, Tool, Model, Provider, and Agent.
- Message is the typed payload that crosses providers, tools, sessions, compaction, and UI surfaces.
- Tool receives a JSON directive in a message and returns a message.
- Model is the backend request/response interface.
- Provider owns authentication and constructs models.
- Agent owns the loop, model, tools, inbox, session, and compactor.
TextMessage is intentionally central: it is the common communication interface across the runtime.
See Concepts and Architecture.
Inbox zero
Most agent frameworks are turn-based: user sends a message, agent
processes it, agent responds, repeat. Sagent instead uses a
drain-run-check loop:
while True:
    drain inbox into user messages
    call model
    if tool calls exist: dispatch tools and loop
    if inbox is empty and model is done: go idle
The agent goes idle only when the inbox is empty and the model has
nothing left to do. It wakes when anything lands in the inbox.
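To make the drain-run-check shape concrete, here is a stripped-down illustrative loop. It is not Sagent's implementation: the queue, the model callable, the dispatch_tools callable, and the tool_calls attribute are placeholders.

import asyncio

async def drain_run_check(inbox: asyncio.Queue, model, dispatch_tools) -> None:
    # Illustrative sketch only; names and signatures are placeholders, not sagent APIs.
    while True:
        # Drain: block until something arrives, then empty the rest of the inbox.
        messages = [await inbox.get()]
        while not inbox.empty():
            messages.append(inbox.get_nowait())

        # Run: call the model, dispatching tools until it stops requesting them.
        response = await model(messages)
        while response.tool_calls:
            messages.extend(await dispatch_tools(response.tool_calls))
            response = await model(messages)

        # Check: with no tool calls left, fall back to the blocking get() above,
        # which is the "idle until something lands in the inbox" state.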
Every surface - REPL, Slack, CLI, parent agent, or application code -
puts messages in the same inbox. User input, background task results,
and agent-to-agent messages use the same mechanism instead of separate
plumbing. User messages go to the front; background and peer messages
append at the back.
Context-affecting slash commands follow the same rule. /clear is
queued and interpreted at the agent's single inbox drain point. Surface-local
commands that do not mutate model context, such as /model, may
be handled by the REPL before entering the inbox.
Message: typed payloads plus graph edges
Sagent messages use MIME-style descriptors for heterogeneous payloads,
plus ids and parent ids for graph structure. The public Message union
contains TextMessage, BytesMessage, JsonMessage, and MultipartMessage.
MultipartMessage content is recursive: compound messages hold nested
messages. An assistant turn containing text, thinking, and tool calls
uses the same message graph as a single text chunk. Descriptors such as text/plain, multipart/x-tool-call, and application/x-done tell
callers how to interpret the payload.
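As an illustration of that shape (the field names here are hypothetical, not the actual custom_types definitions), a message carries a MIME-style descriptor, an id, a parent id, and a payload, and a multipart message nests other messages:

from dataclasses import dataclass, field

# Hypothetical field names for illustration only; the real definitions are the
# Message union in custom_types.py described above.
@dataclass
class SketchTextMessage:
    id: str
    parent_id: str | None
    descriptor: str  # e.g. "text/plain"
    content: str

@dataclass
class SketchMultipartMessage:
    id: str
    parent_id: str | None
    descriptor: str  # e.g. "multipart/x-tool-call"
    parts: list["SketchTextMessage | SketchMultipartMessage"] = field(default_factory=list)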
Tool: one input message, one output message
Tools are normal Python objects with a small protocol:
class Tool(Protocol):
    name: str
    tool_id: str
    description: str
    directive_schema: JSON
    supports_microcompaction: bool

    def summary(self, msg: Message) -> str: ...
    def prompt(self) -> str | None: ...
    async def run(self, msg: Message) -> Message: ...
Input is a Message with a JSON directive. Output is a Message.
Expected tool failures return descriptor="text/x-error" rather than
raising through the agent loop.
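Here is a skeleton that satisfies the protocol above. It is only a sketch: this README does not show how to construct a reply Message, so run() returns the incoming message unchanged; see examples/custom_tool.py for the real pattern.

# Sketch of a tool matching the protocol above; EchoTool is illustrative, not a built-in.
class EchoTool:
    name = "echo"
    tool_id = "echo-v1"
    description = "Returns its directive unchanged."
    directive_schema = {"type": "object", "properties": {"text": {"type": "string"}}}
    supports_microcompaction = False

    def summary(self, msg) -> str:
        return "echoed directive"

    def prompt(self) -> str | None:
        return None

    async def run(self, msg):
        # A real tool would build a new Message here (descriptor "text/plain",
        # or "text/x-error" for expected failures) from the directive in msg.
        return msg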
Agent follows the same interface pattern as a tool. AgentSpawn is
a tool that builds a child Agent, runs it, and returns the child's
final output as a tool response. That is what makes recursive agent
composition work without a separate orchestration layer.
AgentSelf, AgentSpawn, AgentSend
- AgentSelf lets an agent inspect or mutate its own state: update status, compact context, clear context, change model, inspect diagnostics, and adjust token limits.
- AgentSpawn creates child agents with explicit tool/depth limits for isolated reviews, subtasks, and map-reduce work (see the sketch after this list).
- AgentSend delivers a message to another live named agent's inbox. This makes multi-agent coordination peer-to-peer rather than only parent-to-child.
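Assuming the coordination primitives are exposed alongside the file tools with no-argument constructors (the exact import path and constructor arguments are not shown in this README; check the Tools documentation), handing them to an agent looks like handing it any other tool:

from sagent import tools
from sagent.agent import Agent
from sagent.providers import Anthropic

# Assumption: AgentSelf, AgentSpawn, and AgentSend live under sagent.tools and
# take no required constructor arguments -- verify against the Tools docs.
coordinator = Agent(
    model=Anthropic.from_env().model("claude-sonnet-4-6"),
    system="Delegate reviews to an isolated child agent and report back.",
    tools=[tools.Read(), tools.AgentSelf(), tools.AgentSpawn(), tools.AgentSend()],
)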
Security and privacy
Sagent is an agent runtime, not a sandbox. Enabled tools run with the current
process permissions: Bash executes local commands, file tools read and write
accessible paths, and provider/network tools send data to their configured
services. Sessions are plaintext local state and may contain prompts, model
responses, tool results, file snippets, and paths.
Use narrow tool sets, pass --no-session-persistence for one-off sensitive
prompts so sessions and auto-memory are disabled, and run Sagent inside your own
OS/container sandbox when a task needs hard isolation. See
Security.
Current scope
Sagent does not currently include:
- MCP integration;
- LSP integration;
- native sandboxing;
- a desktop UI;
- a tree-sitter repo map;
- a hosted service;
- browser automation.
Adjacent projects
This comparison focuses on the runtime shape rather than every feature
of each project.
| | Sagent | aider | LangChain | OpenClaw | Cline | Claude Code | Codex CLI | Gemini CLI | Flue | Pi | Attractor |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Python library | ✅ | 🟡 | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Multi-provider | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ |
| Context compaction | ✅ | 🟡 | 🟡 | ❌ | 🟡 | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
| User-initiated backend swap | ✅ | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ |
| Agent-initiated backend swap | ✅ | ❌ | 🟡 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 🟡 |
| Agent self-mutation | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 🟡 | ❌ |
| Context hot-swap | ✅ | 🟡 | 🟡 | 🟡 | 🟡 | ❌ | ❌ | ❌ | 🟡 | ✅ | ✅ |
| Recursive agent spawn | ✅ | ❌ | ✅ | 🟡 | ❌ | 🟡 | 🟡 | ❌ | ✅ | 🟡 | ✅ |
| Multi-agent (fully detached) | ✅ | ❌ | ✅ | ✅ | ❌ | 🟡 | 🟡 | ❌ | ✅ | 🟡 | ✅ |
| GitHub stars (May 2026) | -- | 44.4k | 135.8k | 368.6k | 61.4k | -- | 80.1k | 103.2k | 2.5k | 45.0k | 1.1k |
✅ = yes, 🟡 = partial, ❌ = no. Corrections welcome --
open a PR.
How each project works
aider --
Git-native pair programmer. The LLM emits markdown-formatted edits (14
edit formats) and aider parses them -- there is no structured tool calling.
All providers route through litellm as a single string-addressed transport. /model switches the backend mid-session by raising SwitchCoder, which
reconstructs the entire Coder object; conversation history carries over
but the swap is destructive. A tree-sitter repo map with PageRank ranking
provides structural code awareness that Sagent lacks. No multi-agent
capabilities beyond a synchronous Architect-to-Editor handoff. Importable
via Coder.create() but the scripting API is explicitly unsupported and
may change without notice.
LangChain/LangGraph --
Broad Python application framework for LLM pipelines. Multi-provider,
multi-agent (via LangGraph state machines), and fully programmatic. Context
compaction, backend swapping, and agent self-mutation are all possible but
application-defined rather than built-in -- the framework provides building
blocks, not an opinionated agent loop. Sagent is a smaller, more
opinionated runtime with typed protocols, a concrete inbox loop, and
built-in session persistence.
OpenClaw --
Multi-platform personal assistant (desktop, mobile, web) with multi-provider
and multi-agent support. Agents coordinate across channels but the system
is oriented toward end-user assistant workflows rather than developer
tooling. TypeScript-based, not available as a Python library.
Cline --
VS Code extension with multi-provider support. Users can switch models in
the settings panel mid-conversation, but the extension is not importable as
a library. Single-agent with no spawn or coordination primitives. Context
management is truncation-based rather than structured compaction.
Claude Code (Anthropic) --
Closed-source vendor CLI with strong tool-use capabilities and structured
context compaction. Agents can spawn recursive sub-agents and compact their
own context, but cannot switch providers (Anthropic-only) or dynamically
adjust token limits. Not available as a Python library; the SDK is
JavaScript. No user-initiated backend swap since there is only one backend.
Codex CLI (OpenAI) --
Rust-based CLI locked to OpenAI models. Single-agent, single-provider, no
compaction, no programmatic API. Clean local-execution model with
sandboxing, but no extensibility surface for custom tools, provider
swapping, or multi-agent coordination.
Gemini CLI (Google) --
TypeScript CLI locked to Google models. Has context compaction via
summarization. Single-agent, single-provider, no programmatic API, no
custom tool protocol. Designed as a terminal interface for Gemini, not as a
composable runtime.
Flue (Astro) --
TypeScript "agent harness framework," explicitly headless and runtime-agnostic
(Node.js, Cloudflare Workers, GitHub Actions). Agents are TypeScript modules
with triggers (HTTP webhook, CLI). Sandbox is pluggable: a fast in-process just-bash virtual sandbox by default, or full Linux containers via
Daytona/E2B connectors. session.task() spawns child agents in the same
sandbox; the same primitive is exposed to the LLM, so agents can recursively
delegate. Multi-provider via model strings (anthropic/claude-sonnet-4-6, openrouter/...). No interactive UI, no built-in compaction, no
agent-initiated backend swap -- the developer chooses the model at init()
or per call. Skills, AGENTS.md, and per-call MCP tool injection are
first-class.
Pi (badlogic/pi-mono) --
TypeScript "minimal terminal coding harness." The design point is the
opposite of Sagent's: ship aggressively few defaults and make every layer
extensible (skills, prompt templates, themes, extensions, packages
distributed via npm or git). /model and Ctrl+L swap the backend
mid-session; /tree navigates session history as a branchable tree
(genuine context hot-swap). /reload lets the agent rewrite its own files
and pick up the change in place -- a soft form of self-mutation. Sub-agents,
plan mode, permission gates, sandboxing, and MCP support are all explicitly
not built in; install or build them via extensions. Print/JSON, RPC, and
SDK modes make it embeddable. Star count reflects the whole pi-mono
toolkit (CLI + LLM SDK + UI libraries + Slack bot), not the coding agent
in isolation.
Attractor (StrongDM) --
Specification, not an implementation. A set of NLSpecs (attractor-spec.md, coding-agent-loop-spec.md, unified-llm-spec.md) you hand to a coding agent
and ask it to build. Attractor proper is a DOT-graph pipeline runner: nodes
are AI tasks, edges encode routing/conditions, the graph IS the workflow.
The spec mandates structured fidelity modes (full/truncate/compact/summary:{low,medium,high}) for cross-stage context handoff, per-node model
selection via a CSS-like stylesheet, parallel/fan-in handlers, human-gate
nodes, and checkpoint/resume. The companion coding-agent-loop spec defines
provider-aligned toolsets (apply_patch for OpenAI, edit_file for Anthropic,
gemini-cli tools for Gemini) and subagent spawn primitives. Categories above
reflect what an Attractor-conformant implementation must support; the actual
runtime shape depends on whoever builds it.
Architecture map
| Module | Role |
|---|---|
| bin/cli.py | Terminal entry point |
| bin/slack.py | Slack Socket Mode entry point |
| agent/ | Turn loop, retry, dispatch, sessions |
| compactor.py | Structured compaction and prompt-too-long retry |
| custom_types.py | Message, Tool, Model, Provider protocols |
| providers/ | Anthropic, OpenAI, Google, Moonshot, DashScope, MiniMax, OpenAI-compatible |
| tools/ | Built-in tools for files, shell, web, search, and agent coordination |
| repl/ | prompt_toolkit REPL and diff rendering |
| sessions.py | Per-cwd session storage |
| prompt.py | System prompt assembly |
Name
sagent (noun, neologism) /ˈseɪ.dʒənt/
From sage + agent.
An AI assistant that confidently performs a task you didn't ask for while ignoring the one you did.
"I asked the sagent to fix one failing test -- it deleted the test and reported all green."
Contributing
See CONTRIBUTING.md for local validation and public contribution flow.
License
Apache License 2.0