AgenticGoKit

Security Audit
Passed
Health: Passed
  • License — Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 128 GitHub stars
Code: Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions: Passed
  • Permissions — No dangerous permissions requested
Purpose
This open-source framework written in Go helps developers build, orchestrate, and deploy multi-agent AI systems. It provides the necessary tools for complex AI workflows, including real-time streaming, memory management, and tool discovery via the MCP protocol.

Security Assessment
The overall risk is rated as Low. The automated code scan of 12 files found no dangerous patterns, hardcoded secrets, or requests for excessive permissions. As a framework designed to orchestrate AI agents and communicate with Large Language Models, it inherently requires network access to interact with external AI providers (such as OpenAI, Anthropic, and Azure). However, no dangerous shell executions or unauthorized local data access were detected in the audit.

Quality Assessment
The project demonstrates strong quality and active maintenance. It is licensed under the permissive and standard Apache-2.0 license. The repository is highly active, with its latest code push occurring just today, and has garnered 128 GitHub stars, indicating a healthy and growing level of community trust. It is important to note that the tool is currently in a "beta" phase. While the developers state that the core APIs are stable and ready for new projects, the legacy code packages are scheduled to be deprecated and removed in the future.

Verdict
Safe to use, though developers should be prepared for standard API migrations when the project officially graduates from its current beta status to version 1.0.
SUMMARY

Open-source Agentic AI framework in Go for building, orchestrating, and deploying intelligent agents. LLM-agnostic, event-driven, with multi-agent workflows, MCP tool discovery, and production-grade observability.

README.md

AgenticGoKit

🚀 BETA RELEASE - The v1beta API is now stable and recommended for all new projects. While still in beta, the core APIs are working well and ready for testing. We continue to refine features and welcome feedback and contributions!

📋 API Versioning Plan:

  • Current (v0.x): v1beta package is the recommended API (formerly vnext)
  • v1.0 Release: v1beta will become the primary v1 package
  • Legacy APIs: Both core and core/vnext packages will be removed in v1.0

Robust Go framework for building intelligent multi-agent AI systems


The most productive way to build AI agents in Go. AgenticGoKit provides a unified, streaming-first API for creating intelligent agents with built-in workflow orchestration, tool integration, and memory management. Start with simple single agents and scale to complex multi-agent workflows.

Why Choose AgenticGoKit?

  • v1beta APIs: Modern, streaming-first agent interface with comprehensive error handling
  • Multimodal Support: Native support for images, audio, and video inputs alongside text
  • Real-time Streaming: Watch your agents think and respond in real-time
  • Multi-Agent Workflows: Sequential, parallel, DAG, and loop orchestration patterns
  • Production-Ready Observability: Built-in distributed tracing with OpenTelemetry support
  • Multiple LLM Providers: Seamlessly switch between OpenAI, Anthropic, Ollama, Azure OpenAI, Azure AI Foundry Local, HuggingFace, OpenRouter, vLLM, BentoML, MLFlow, and more
  • High Performance: Compiled Go binaries with minimal overhead
  • Batteries Included: Built-in memory and RAG by default (zero config needed, swappable with pgvector/custom)
  • Rich Integrations: Memory providers, tool discovery, MCP protocol support
  • Active Development: Beta status with stable core APIs and ongoing improvements

Part of the AgenticGoKit Ecosystem

AgenticGoKit is one part of a three-part ecosystem designed to take you from idea to production:

  • Core Framework (AgenticGoKit): Build multi-agent systems with memory, RAG, tools, and orchestration.
  • Developer Tooling (AGK CLI): Scaffold, trace, evaluate, and manage agent workflows from the command line.
  • Template Registry (agk-templates): Official templates powering AGK scaffolds and integrations.

Typical flow: Design with the core framework → Scaffold with AGK using official templates → Build & ship with your preferred deployment stack.

📦 Explore the full ecosystem at the AgenticGoKit Organization.


Quick Start

Start building immediately with the modern v1beta API:

package main

import (
    "context"
    "fmt"
    "log"
    
    "github.com/agenticgokit/agenticgokit/v1beta"
)

func main() {
    // Create a chat agent with Ollama
    agent, err := v1beta.NewBuilder("ChatAgent").
        WithConfig(&v1beta.Config{
            Name:         "ChatAgent",
            SystemPrompt: "You are a helpful assistant",
            LLM: v1beta.LLMConfig{
                Provider: "ollama",
                Model:    "gemma3:1b",
                BaseURL:  "http://localhost:11434",
            },
        }).
        Build()
    if err != nil {
        log.Fatal(err)
    }

    // Basic execution
    result, err := agent.Run(context.Background(), "Explain Go channels in 50 words")
    if err != nil {
        log.Fatal(err)
    }
    
    fmt.Println("Response:", result.Content)
}

Enable observability with a single environment variable:

export AGK_TRACE=true  # Automatic tracing to .agk/runs/<run-id>/trace.jsonl

Note: CLI tooling for AgenticGoKit is provided by the agk package. Install with: go install github.com/agenticgokit/agk@latest

Core Capabilities

AgenticGoKit handles the complexities of building AI systems so you can focus on logic.

Workflow Orchestration

Orchestrate multiple agents using robust patterns. Pass data between agents, handle errors, and manage state automatically.

Real-time Streaming

Built from the ground up for streaming. Receive tokens and tool updates as they happen, suitable for real-time UI experiences.
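The streaming model can be pictured as a channel of tokens consumed as they arrive. The sketch below uses only the standard library to show the consumption pattern; it is not the framework's actual streaming API, whose types and method names differ.

```go
package main

import (
	"fmt"
	"strings"
)

// streamTokens simulates a model emitting tokens one at a time over a
// channel, the shape a streaming-first agent API typically exposes.
func streamTokens(text string) <-chan string {
	ch := make(chan string)
	go func() {
		defer close(ch)
		for _, tok := range strings.Fields(text) {
			ch <- tok
		}
	}()
	return ch
}

func main() {
	var b strings.Builder
	// Render each token as it arrives instead of waiting for the full reply.
	for tok := range streamTokens("agents think in real time") {
		b.WriteString(tok + " ")
	}
	fmt.Println(strings.TrimSpace(b.String()))
}
```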

Memory & RAG

Batteries Included: Agents come with working memory out-of-the-box (chromem embedded vector DB).

  • Features: Chat history preservation, semantic search, and document ingestion.
  • Configurable: Swap the default with pgvector or custom providers easily.

Multimodal Input

Native support for Images, Audio, and Video inputs. Works seamlessly with models like GPT-4 Vision, Gemini Pro Vision, etc.

Tool Integration & MCP

Extend agents with tools using standard Go functions or the Model Context Protocol (MCP).

  • Dynamic Discovery: Automatically find and register tools from MCP servers.
  • Standardized: Support for the emerging standard for LLM tool interoperability.
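Conceptually, dynamic discovery amounts to populating a name-to-function registry at runtime, e.g. from an MCP server's tool listing. The sketch below illustrates that registry pattern in plain Go; the types and names are hypothetical, not AgenticGoKit's MCP client API.

```go
package main

import (
	"errors"
	"fmt"
)

// tool is a callable capability an agent can invoke by name.
type tool func(args map[string]string) (string, error)

// registry maps tool names to implementations; "discovery" means filling
// this map at runtime rather than compiling the tool set in.
type registry map[string]tool

// call dispatches to a registered tool, failing on unknown names.
func (r registry) call(name string, args map[string]string) (string, error) {
	t, ok := r[name]
	if !ok {
		return "", errors.New("unknown tool: " + name)
	}
	return t(args)
}

func main() {
	tools := registry{}
	// Register a tool as if it had just been discovered from a server.
	tools["echo"] = func(args map[string]string) (string, error) {
		return args["msg"], nil
	}
	out, err := tools.call("echo", map[string]string{"msg": "hello"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // hello
}
```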

Observability & Tracing

Production-Ready: Built-in distributed tracing with zero configuration required.

  • Features: OpenTelemetry integration, workflow trace hierarchies, OTLP/Jaeger support.
  • Exporters: Console, file, and OTLP for complete visibility into agent execution.
  • Example: Observability Basics

Supported LLM Providers

AgenticGoKit works with all major LLM providers out of the box:

  • OpenAI (plugins/llm/openai): GPT-4o, GPT-4 Vision, GPT-3.5-turbo. Production-grade conversational and multimodal AI.
  • Anthropic (plugins/llm/anthropic): Claude 3.5 Sonnet, Claude 3 Opus, Haiku. Advanced reasoning and long-context tasks.
  • Azure OpenAI (plugins/llm/azureopenai): GPT-4, GPT-3.5-turbo. Enterprise deployments with Azure.
  • Azure AI Foundry Local (plugins/llm/foundrylocal): Phi-3.5 Mini, Qwen 2.5, Mistral, Llama. On-device / local inference via Foundry Local (no auth required).
  • Ollama (plugins/llm/ollama): Llama 3, Gemma, Mistral, Phi. Local development and privacy-focused apps.
  • HuggingFace (plugins/llm/huggingface): Llama-2, Mistral, Falcon. Open-source model experimentation via the HF Inference API.
  • OpenRouter (plugins/llm/openrouter): GPT-4, Claude, Llama, Mixtral. Access to 100+ models via a single API key.
  • BentoML (plugins/llm/bentoml): Any model packaged as a Bento. Self-hosted ML models with production features.
  • MLFlow (plugins/llm/mlflow): Models via the MLFlow AI Gateway. ML model deployment and management.
  • vLLM (plugins/llm/vllm): Llama-2, Mistral, Qwen, etc. High-throughput LLM serving with PagedAttention.
  • Custom: Any OpenAI-compatible API. Bring your own provider.
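Because configuration is uniform across providers, switching usually means changing only the LLMConfig fields shown in the Quick Start. In this fragment the model name and the empty BaseURL are illustrative, not prescribed values:

```go
// Swap the Quick Start's local Ollama setup for a hosted provider.
cfg := v1beta.LLMConfig{
    Provider: "openai", // was "ollama"
    Model:    "gpt-4o",
    BaseURL:  "", // leave empty to use the provider's default endpoint
}
```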

Learning Resources

Documentation

Examples

API Versioning & Roadmap

Current Status (v0.x - Beta)

  • Recommended: Use v1beta package for all new projects
  • Import Path: github.com/agenticgokit/agenticgokit/v1beta
  • Stability: Beta; core APIs are stable and functional, suitable for testing and development
  • Status: v1beta is the evolution of the former core/vnext package
  • Note: core package is legacy and will be removed in v1.0

v1.0 Release Plan

What's Changing:

  • v1beta package will become the primary v1 API
  • Legacy core and core/vnext packages will be removed entirely
  • Clean, stable API with semantic versioning guarantees

Migration Path:

  • If you're using v1beta or vnext: Minimal changes (import path update only)
  • If you're using core: Migrate to v1beta now to prepare
  • core/vnext users: vnext has been renamed to v1beta - update imports

Timeline:

  • v0.x (Current): v1beta stabilization and testing
  • v1.0 (Planned): v1beta becomes v1; legacy core package removed

Why v1beta Now?

The v1beta package represents our next-generation API design:

  • ✅ Streaming-first architecture
  • ✅ Unified builder pattern
  • ✅ Better error handling
  • ✅ Workflow composition
  • ✅ Stable core APIs (beta status)
  • ⚠️ Minor changes possible before v1.0

By using v1beta today, you're getting access to the latest features and helping shape the v1.0 release with your feedback.

Resources

Contributing

We welcome contributions! See docs/contributors/ContributorGuide.md for getting started.

License

Apache 2.0 - see LICENSE for details.
