tRPC-Agent-Go
A powerful Go framework for building intelligent agent systems that transforms how you create AI applications. Build autonomous agents that think, remember, collaborate, and act with unprecedented ease.
Why tRPC-Agent-Go?
- Intelligent Reasoning: Advanced hierarchical planners and multi-agent orchestration
- Rich Tool Ecosystem: Seamless integration with external APIs, databases, and services
- Persistent Memory: Long-term state management and contextual awareness
- Multi-Agent Collaboration: Chain, parallel, and graph-based agent workflows
- GraphAgent: Type-safe graph workflows with multi-conditional routing, bringing LangGraph-style workflows to Go
- Agent Skills: Reusable SKILL.md workflows with safe execution
- Artifacts: Versioned storage for files produced by agents and tools
- Prompt Caching: Automatic cost optimization with up to 90% savings on cached content
- Evaluation & Benchmarks: Eval sets + metrics to measure quality over time
- UI & Server Integration: AG-UI (Agent-User Interaction) and Agent-to-Agent (A2A) interoperability
- Production Ready: Built-in telemetry, tracing, and enterprise-grade reliability
- High Performance: Optimized for scalability and low latency
Use Cases
Perfect for building:
- Customer Support Bots - Intelligent agents that understand context and solve complex queries
- Data Analysis Assistants - Agents that query databases, generate reports, and provide insights
- DevOps Automation - Smart deployment, monitoring, and incident response systems
- Business Process Automation - Multi-step workflows with human-in-the-loop capabilities
- Research & Knowledge Management - RAG-powered agents for document analysis and Q&A
Key Features
- Multi-Agent Orchestration
- Advanced Memory System
- Rich Tool Integration
- Production Observability
- Agent Skills
- Evaluation & Benchmarks
Documentation
Ready to dive into tRPC-Agent-Go? Our documentation covers everything from basic concepts to advanced techniques, helping you build powerful AI applications with confidence. Whether you're new to AI agents or an experienced developer, you'll find detailed guides, practical examples, and best practices to accelerate your development journey.
Quick Start
See it in Action: [Demo GIF placeholder - showing agent reasoning and tool usage]
Prerequisites
- Go 1.21 or later
- LLM provider API key (OpenAI, DeepSeek, etc.)
- 5 minutes to build your first intelligent agent
Run the Example
Get started in 3 simple steps:
# 1. Clone and setup
git clone https://github.com/trpc-group/trpc-agent-go.git
cd trpc-agent-go
# 2. Configure your LLM
export OPENAI_API_KEY="your-api-key-here"
export OPENAI_BASE_URL="your-base-url-here" # Optional
# 3. Run your first agent!
cd examples/runner
go run . -model="gpt-4o-mini" -streaming=true
What you'll see:
- Interactive chat with your AI agent
- Real-time streaming responses
- Tool usage (calculator + time tools)
- Multi-turn conversations with memory
Try asking: "What's the current time? Then calculate 15 * 23 + 100"
Basic Usage
package main
import (
"context"
"fmt"
"log"
"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
"trpc.group/trpc-go/trpc-agent-go/model"
"trpc.group/trpc-go/trpc-agent-go/model/openai"
"trpc.group/trpc-go/trpc-agent-go/runner"
"trpc.group/trpc-go/trpc-agent-go/tool"
"trpc.group/trpc-go/trpc-agent-go/tool/function"
)
func main() {
// Create model.
modelInstance := openai.New("deepseek-chat",
openai.WithVariant(openai.VariantDeepSeek),
)
// Create tool.
calculatorTool := function.NewFunctionTool(
calculator,
function.WithName("calculator"),
function.WithDescription("Execute addition, subtraction, multiplication, and division. "+
"Parameters: a, b are numeric values, op takes values add/sub/mul/div; "+
"returns result as the calculation result."),
)
// Enable streaming output.
genConfig := model.GenerationConfig{
Stream: true,
}
// Create Agent.
agent := llmagent.New("assistant",
llmagent.WithModel(modelInstance),
llmagent.WithTools([]tool.Tool{calculatorTool}),
llmagent.WithGenerationConfig(genConfig),
)
// Create Runner.
runner := runner.NewRunner("calculator-app", agent)
// Execute conversation.
ctx := context.Background()
events, err := runner.Run(ctx,
"user-001",
"session-001",
model.NewUserMessage("Calculate what 2+3 equals"),
)
if err != nil {
log.Fatal(err)
}
// Process event stream.
for event := range events {
if event.Object == "chat.completion.chunk" {
fmt.Print(event.Response.Choices[0].Delta.Content)
}
}
fmt.Println()
}
func calculator(ctx context.Context, req calculatorReq) (calculatorRsp, error) {
var result float64
switch req.Op {
case "add", "+":
result = req.A + req.B
case "sub", "-":
result = req.A - req.B
case "mul", "*":
result = req.A * req.B
case "div", "/":
result = req.A / req.B
default:
return calculatorRsp{}, fmt.Errorf("invalid operation: %s", req.Op)
}
return calculatorRsp{Result: result}, nil
}
type calculatorReq struct {
A float64 `json:"A" jsonschema:"description=First operand,required"`
B float64 `json:"B" jsonschema:"description=Second operand,required"`
Op string `json:"Op" jsonschema:"description=Operation type,enum=add,enum=sub,enum=mul,enum=div,required"`
}
type calculatorRsp struct {
Result float64 `json:"result"`
}
Dynamic Agent per Request
Sometimes your Agent must be created per request (for example: different
prompt, model, tools, sandbox instance). In that case, you can let Runner build
a fresh Agent for every Run(...):
r := runner.NewRunnerWithAgentFactory(
"my-app",
"assistant",
func(ctx context.Context, ro agent.RunOptions) (agent.Agent, error) {
// Use ro to build an Agent for this request.
a := llmagent.New("assistant",
llmagent.WithInstruction(ro.Instruction),
)
return a, nil
},
)
events, err := r.Run(ctx,
"user-001",
"session-001",
model.NewUserMessage("Hello"),
agent.WithInstruction("You are a helpful assistant."),
)
_ = events
_ = err
Stop / Cancel a Run
If you want to interrupt a running agent, cancel the context you passed to Runner.Run (recommended). This stops model calls and tool calls safely and lets the runner clean up.
Important: do not just “break” your event loop and walk away — the agent
goroutine may keep running and can block on channel writes. Always cancel, then
keep draining the event channel until it is closed.
Option A: Ctrl+C (terminal programs)
Convert Ctrl+C into context cancellation:
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
defer stop()
events, err := r.Run(ctx, userID, sessionID, message)
if err != nil {
return err
}
for range events {
// Drain until the runner stops (ctx canceled or run completed).
}
Option B: Cancel from your code
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
events, err := r.Run(ctx, userID, sessionID, message)
if err != nil {
return err
}
go func() {
time.Sleep(2 * time.Second)
cancel()
}()
for range events {
// Keep draining until the channel is closed.
}
Option C: Cancel by requestID (for servers / background runs)
requestID := "req-123"
events, err := r.Run(ctx, userID, sessionID, message,
agent.WithRequestID(requestID),
)
mr := r.(runner.ManagedRunner)
_ = mr.Cancel(requestID)
For more details (including detached cancellation, resume, and server cancel
routes), see docs/mkdocs/en/runner.md and docs/mkdocs/en/agui.md.
Examples
The examples directory contains runnable demos covering every major feature.
1. Tool Usage
- examples/agenttool – Wrap agents as callable tools.
- examples/multitools – Multiple tools orchestration.
- examples/duckduckgo – Web search tool integration.
- examples/filetoolset – File operations as tools.
- examples/fileinput – Provide files as inputs.
- examples/agenttool shows streaming and non-streaming
patterns.
2. LLM-Only Agent
Example: examples/llmagent
- Wrap any chat-completion model as an LLMAgent.
- Configure system instructions, temperature, max tokens, etc.
- Receive incremental event.Event updates while the model streams.
3. Multi-Agent Runners
Example: examples/multiagent
- ChainAgent – linear pipeline of sub-agents.
- ParallelAgent – run sub-agents concurrently and merge results.
- CycleAgent – iterate until a termination condition is met.
4. Graph Agent
Example: examples/graph
GraphAgent – demonstrates building and executing complex, conditional workflows using the graph and agent/graph packages. It shows how to construct a graph-based agent, manage state safely, implement conditional routing, and orchestrate execution with the Runner.
Multi-conditional fan-out routing:
// Return multiple branch keys and run targets in parallel.
sg := graph.NewStateGraph(schema)
sg.AddNode("router", func(ctx context.Context, s graph.State) (any, error) {
return nil, nil
})
sg.AddNode("A", func(ctx context.Context, s graph.State) (any, error) {
return graph.State{"a": 1}, nil
})
sg.AddNode("B", func(ctx context.Context, s graph.State) (any, error) {
return graph.State{"b": 1}, nil
})
sg.SetEntryPoint("router")
sg.AddMultiConditionalEdges(
"router",
func(ctx context.Context, s graph.State) ([]string, error) {
return []string{"goA", "goB"}, nil
},
map[string]string{"goA": "A", "goB": "B"}, // Path map: branch key -> target node.
)
sg.SetFinishPoint("A").SetFinishPoint("B")
5. Memory
Example: examples/memory
- In‑memory and Redis memory services with CRUD, search and tool integration.
- How to configure, call tools and customize prompts.
6. Knowledge
Example: examples/knowledge
- Basic RAG example: load sources, embed to a vector store, and search.
- How to use conversation context and tune loading/concurrency options.
7. Telemetry & Tracing
Example: examples/telemetry
- OpenTelemetry hooks across model, tool and runner layers.
- Export traces to OTLP endpoint for real-time analysis.
8. MCP Integration
Example: examples/mcptool
- Wrapper utilities around trpc-mcp-go, an implementation of the Model Context Protocol (MCP).
- Provides structured prompts, tool calls, resource and session messages that follow the MCP specification.
- Enables dynamic tool execution and context-rich interactions between agents and LLMs.
9. AG-UI Demo
Example: examples/agui
- Exposes a Runner through the AG-UI (Agent-User Interaction) protocol.
- Built-in Server-Sent Events (SSE) server, plus client samples (for example,
CopilotKit and TDesign Chat).
10. Evaluation
Example: examples/evaluation
- Evaluate an agent with repeatable eval sets and pluggable metrics.
- Includes local file-backed runs and in-memory runs.
11. Agent Skills
Examples: examples/skillrun,
examples/skillfind
- Skills are folders with a SKILL.md spec + optional docs/scripts.
- Built-in tools: skill_load, skill_list_docs, skill_select_docs, skill_run, and (when the executor supports interactive sessions) skill_exec, skill_write_stdin, skill_poll_session, skill_kill_session.
- skill_run is the default one-shot command runner in an isolated workspace. skill_exec and the session tools cover interactive stdin/TTY flows without inlining full scripts into the prompt. They are registered only when the code executor exposes InteractiveProgramRunner (or falls back to a local engine that does).
- skill.NewFSRepository(...) can scan multiple roots, such as a shared skills directory plus a user-private directory. Use (*skill.FSRepository).Refresh() after skill installation or removal in long-lived processes.
- Prefer using skill_run only for commands required by the selected skill docs, not for generic shell exploration.
- When LLMAgent uses WithCodeExecutor(...) only to support skill_run, disable the response code execution processor with llmagent.WithEnableCodeExecutionResponseProcessor(false). The skill-focused examples (examples/skill, examples/skillrun, examples/skilldynamicschema, and examples/structuredoutputskills) follow this pattern so fenced code blocks embedded in assistant text do not auto-execute.
- examples/skillfind demonstrates a real end-to-end discovery flow: the model uses a built-in skill-find skill to search the public web, install a public GitHub skill into a user-private directory, refresh the repository, and use the new skill in the same conversation. Local execution stays off by default and can be enabled explicitly when you want to run an installed skill.
12. Artifacts
Example: examples/artifact
- Save and retrieve versioned files (images, text, reports) produced by tools.
- Supports multiple backends (in-memory, S3, COS).
13. A2A Interop
Example: examples/a2aadk
- Agent-to-Agent (A2A) interop with an ADK Python A2A server.
- Demonstrates streaming, tool calls, and code execution across runtimes.
14. Gateway Server
Example: openclaw
- A minimal OpenClaw-like gateway server.
- Stable session ids and per-session serialization.
- Basic safety controls: allowlist + mention gating.
- OpenClaw-like implementation (Telegram + gateway): openclaw
Other notable examples:
- examples/humaninloop – Human in the loop.
- examples/codeexecution – Secure code execution.
See individual README.md files in each example folder for usage details.
Architecture Overview
Architecture
Execution Flow
- Runner orchestrates the entire execution pipeline with session management
- Agent processes requests using multiple specialized components
- Planner determines the optimal strategy and tool selection
- Tools execute specific tasks (API calls, calculations, web searches)
- Memory maintains context and learns from interactions
- Knowledge provides RAG capabilities for document understanding
Key packages:
| Package | Responsibility |
|---|---|
| agent | Core execution unit, responsible for processing user input and generating responses. |
| runner | Agent executor, responsible for managing execution flow and connecting Session/Memory Service capabilities. |
| model | Supports multiple LLM models (OpenAI, DeepSeek, etc.). |
| tool | Provides various tool capabilities (Function, MCP, DuckDuckGo, etc.). |
| session | Manages user session state and events. |
| memory | Records user long-term memory and personalized information. |
| knowledge | Implements RAG knowledge retrieval capabilities. |
| planner | Provides Agent planning and reasoning capabilities. |
| artifact | Stores and retrieves versioned files produced by agents and tools (images, reports, etc.). |
| skill | Loads and executes reusable Agent Skills defined by SKILL.md. |
| event | Defines event types and streaming payloads used across Runner and servers. |
| evaluation | Evaluates agents on eval sets using pluggable metrics and stores results. |
| server | Exposes HTTP servers (Gateway, AG-UI, A2A) for integration and UIs. |
| telemetry | OpenTelemetry tracing and metrics instrumentation. |
Using Built-in Agents
For most applications you do not need to implement the agent.Agent
interface yourself. The framework already ships with several ready-to-use
agents that you can compose like Lego bricks:
| Agent | Purpose |
|---|---|
| LLMAgent | Wraps an LLM chat-completion model as an agent. |
| ChainAgent | Executes sub-agents sequentially. |
| ParallelAgent | Executes sub-agents concurrently and merges output. |
| CycleAgent | Loops over a planner + executor until a stop signal. |
Multi-Agent Collaboration Example
// 1. Create a base LLM agent.
base := llmagent.New(
"assistant",
llmagent.WithModel(openai.New("gpt-4o-mini")),
)
// 2. Create a second LLM agent with a different instruction.
translator := llmagent.New(
"translator",
llmagent.WithInstruction("Translate everything to French"),
llmagent.WithModel(openai.New("gpt-3.5-turbo")),
)
// 3. Combine them in a chain.
pipeline := chainagent.New(
"pipeline",
chainagent.WithSubAgents([]agent.Agent{base, translator}),
)
// 4. Run through the runner for sessions & telemetry.
run := runner.NewRunner("demo-app", pipeline)
events, _ := run.Run(ctx, "user-1", "sess-1",
model.NewUserMessage("Hello!"))
for ev := range events { /* ... */ }
The composition API lets you nest chains, cycles, or parallels to build complex
workflows without low-level plumbing.
Contributing
We love contributions! Join our growing community of developers building the future of AI agents.
Ways to Contribute
- Report bugs or suggest features via Issues
- Improve documentation - help others learn faster
- Submit PRs - bug fixes, new features, or examples
- Share your use cases - inspire others with your agent applications
Quick Contribution Setup
# Fork & clone the repo
git clone https://github.com/YOUR_USERNAME/trpc-agent-go.git
cd trpc-agent-go
# Run tests to ensure everything works
go test ./...
go vet ./...
# Make your changes and submit a PR!
Please read CONTRIBUTING.md for detailed guidelines and coding standards.
Acknowledgements
Enterprise Validation
Special thanks to Tencent's business units including Tencent Yuanbao, Tencent Video, Tencent News, IMA, and QQ Music for their invaluable support and real-world validation. Production usage drives framework excellence!
Open Source Inspiration
Inspired by amazing frameworks like ADK, Agno, CrewAI, AutoGen, and many others. Standing on the shoulders of giants!
Star History
License
Licensed under the Apache 2.0 License - see LICENSE file for details.
Star us on GitHub • Report Issues • Join Discussions
Built with love by the tRPC-Agent-Go team
Empowering developers to build the next generation of intelligent applications