go-agent
Health Warn
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 5 GitHub stars
Code Warn
- Code scan incomplete — No supported source files were scanned during light audit
Permissions Pass
- Permissions — No dangerous permissions requested
This is a lightweight, interactive AI agent framework written in Go. It connects to external Large Language Models (LLMs) to answer prompts, featuring built-in file manipulation, bash execution, and a two-layer memory system for persistent conversations.
Security Assessment
Overall Risk: Medium. The tool acts as a bridge to an external LLM via a user-provided API key, which means it inherently makes network requests and handles sensitive credentials (expected to be stored locally in a `.env` file). While there are no hardcoded secrets, its core capabilities include executing arbitrary shell commands (`Bash`) and reading, editing, and writing files (`Read_file`, `Edit_file`, `Write_file`). If connected to an untrusted or easily manipulated LLM, the agent could be tricked into executing malicious commands or modifying local files.
Quality Assessment
The project is very new and currently has low visibility, evidenced by only 5 GitHub stars. However, it is actively maintained (last updated today) and uses a standard, permissive MIT license. The automated code scan was incomplete, which means a deeper manual review of the Go source code is recommended before integrating this into critical environments. Community trust is currently limited due to the project's early stage.
Verdict
Use with caution: the framework is actively maintained and licensed, but its ability to execute arbitrary system commands combined with its early-stage development and reliance on external LLMs requires strict oversight in trusted environments only.
go-agent
A small interactive AI agent written in Go.
Features
- LLM-driven agent core
- Built-in tools:
  - `Bash`
  - `Read_file`
  - `Edit_file`
  - `Write_file`
  - `Memory` for long-term memory entries
- Two-layer memory system:
- chat history for the ongoing conversation
- long-term memory for durable facts, preferences, and decisions
- Automatic skill discovery from the `skills` directory
- Interactive CLI entry point in `main.go`
Quick Start
1. Requirements
- Go 1.20 or newer is recommended
- An LLM service with a compatible API, configured via `LLM_BASE_URL`, `LLM_API_KEY`, and `LLM_MODEL`
2. Configuration
Create a `.env` file in the project root. You can start from `.env.example`:
LLM_BASE_URL=YOUR_LLM_BASE_URL
LLM_API_KEY=YOUR_LLM_API_KEY
LLM_MODEL=YOUR_LLM_MODEL
3. Run Demo
Run the demo CLI from the project root:
go run .
The demo will:
- load `.env` if present
- restore chat history from `.memory/HISTORY.jsonl`
- restore long-term memory from `.memory/MEMORY.md`
- start an interactive prompt
- persist both memory stores as the session continues
Type `exit` to quit.
Tools
To add your own tool, implement the core.Tool interface and pass it to agent.WithTools().
Required methods:
- `Name() string`
- `Description() string`
- `Parameters() core.Parameters`
- `Execute(input string) (string, error)`
Minimal example:
type EchoTool struct{}
func (t *EchoTool) Name() string { return "Echo" }
func (t *EchoTool) Description() string { return "Return the input text." }
func (t *EchoTool) Parameters() core.Parameters {
return core.Parameters{
Type: "object",
Properties: map[string]core.Param{
"text": {Type: "string", Description: "Text to echo back"},
},
Required: []string{"text"},
}
}
func (t *EchoTool) Execute(input string) (string, error) {
params, err := core.ParseParams(input, "text")
if err != nil {
return "", err
}
return params["text"].(string), nil
}
Register it like this:
agent.WithTools(
&tools.BashTool{},
&tools.ReadFileTool{},
&EchoTool{},
)
Memory
To add your own memory backend, implement core.ChatMemoryStore and core.LongMemoryStore, then pass them to core.NewMemory().
The chat store requires these methods:
- `Get(int) (core.ChatMessage, error)`
- `Append(core.ChatMessage) error`
- `Update(int, core.ChatMessage) error`
- `Replace(int, int, []core.ChatMessage) error`
- `Delete(int) error`
- `List() ([]core.ChatMessage, error)`
- `Count() (int, error)`
- `Clear() error`
- `Clone() core.ChatMemoryStore`
The long memory store uses the same pattern for strings:
- `Get(int) (string, error)`
- `Append(string) error`
- `Update(int, string) error`
- `Replace(int, int, []string) error`
- `Delete(int) error`
- `List() ([]string, error)`
- `Count() (int, error)`
- `Clear() error`
- `Clone() core.LongMemoryStore`
Example:
memory := core.NewMemory(
storage.NewFileChatStore(".memory/HISTORY.jsonl"),
storage.NewFileLongStore(".memory/MEMORY.md"),
)
The built-in Memory tool manages long-term memory entries with create, update, and delete operations.
Skills
Skills are registered through agent.WithSkills(). Pass one or more skill directories, and the agent will discover skills automatically.
Example:
agent.WithSkills("skills")
Each skill is a folder that contains a SKILL.md file, such as skills/weather/SKILL.md.
The skill loader scans the provided directories, reads each SKILL.md frontmatter, and includes that metadata in the system prompt.