adk-go-ollama

mcp
Security Audit
Warn
Health Warn
  • License — Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 5 GitHub stars
Code Pass
  • Code scan — Scanned 7 files during light audit, no dangerous patterns found
Permissions Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool provides an Ollama model provider implementation for adk-go, enabling developers to run AI agents on local language models like Llama 3 and Mistral using standard ADK APIs.

Security Assessment
The overall risk is Low. The codebase scan across 7 files found no dangerous patterns, hardcoded secrets, or requests for overly broad permissions. As an integration wrapper, its primary function is to make network requests to an Ollama instance (either running locally or over a network). It does not execute arbitrary shell commands or access sensitive system data outside of standard API communications.

Quality Assessment
The project is actively maintained, with its most recent push happening today. It benefits from strong development hygiene, utilizing pre-commit hooks, a linter, and conventional commit enforcement, which indicates professional-grade care. It is licensed under the permissive Apache-2.0. However, community trust and visibility are currently very low. With only 5 GitHub stars, the tool has not yet been widely tested or adopted by the broader developer community.

Verdict
Safe to use, though you should expect minimal community support due to its low visibility.
SUMMARY

Ollama model provider for adk-go. For an AWS Bedrock equivalent, check out https://github.com/craigh33/adk-go-bedrock!

README.md

adk-go-ollama banner showing Agent Development Kit connected to Ollama

adk-go-ollama

Ollama implementation of the model.LLM interface for adk-go, so you can run agents on local models like Llama 3, Mistral, and others with the same ADK APIs you use for Gemini.

Requirements

  • Go 1.25+ (aligned with google.golang.org/adk)
  • An instance of Ollama running locally or accessible over the network
  • golangci-lint if you run make lint (uses .golangci.yaml)

Install

go get github.com/craigh33/adk-go-ollama

Replace the module path with your fork or published path if you rename the module in go.mod.

Makefile

Target                   Description
make test                Run unit tests
make build               Compile all packages
make lint                Run golangci-lint run ./...
make pre-commit-install  Install pre-commit hooks

Contributing / Development

Pre-commit hooks

This project uses pre-commit to enforce code quality and commit hygiene. The following tools must be available on your PATH before installing the hooks:

Tool           Purpose                      Install
pre-commit     Hook framework               brew install pre-commit
golangci-lint  Go linter (runs make lint)   brew install golangci-lint
gitleaks       Secret / credential scanner  brew install gitleaks

Once the tools are installed, wire the hooks into your local clone:

make pre-commit-install

This installs hooks for both the pre-commit stage and the commit-msg stage.

What the hooks do

Hook                     Stage       Description
trailing-whitespace      pre-commit  Strips trailing whitespace
end-of-file-fixer        pre-commit  Ensures files end with a newline
check-yaml               pre-commit  Validates YAML syntax
no-commit-to-branch      pre-commit  Prevents direct commits to main
conventional-pre-commit  commit-msg  Enforces Conventional Commits message format (feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert)
golangci-lint            pre-commit  Runs make lint against all Go files
gitleaks                 pre-commit  Scans the staged diff for secrets/credentials
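A commit message can be sanity-checked against the type list above before committing. The regex below is an illustrative approximation of what conventional-pre-commit enforces, not the hook's exact pattern:

```shell
# Approximate Conventional Commits check (illustrative; not the hook's exact regex).
check_msg() {
  echo "$1" | grep -Eq '^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([^)]+\))?!?: .+'
}

check_msg "feat(ollama): add streaming support" && echo "accepted"
check_msg "update stuff" || echo "rejected"
```

Messages that fail the real check are rejected at the commit-msg stage, so the commit never lands with a malformed message.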

Usage

ctx := context.Background()

m, err := ollama.New("gemma3")
if err != nil {
    log.Fatal(err)
}

agent, err := llmagent.New(llmagent.Config{
    Name:        "assistant",
    Model:       m,
    Instruction: "You are helpful.",
})
if err != nil {
    log.Fatal(err)
}
// Wire agent into runner.New(...) as usual; ctx is passed in when running the agent.

ollama.New accepts a model name as recognized by your Ollama instance. LLMRequest.Model can override the model name at runtime.

The internal/mappers package holds genai ↔ Ollama conversions (requests, responses, tools, usage). It is internal to this module and used by the ollama package.

Examples

Each example has its own README.md and Makefile. You can override the default model with the OLLAMA_MODEL environment variable.

Run the chat example:

make -C examples/ollama-chat run

Run tool calling example:

make -C examples/ollama-tool-calling run

Run streaming example:

make -C examples/ollama-stream run
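The OLLAMA_MODEL override follows the usual environment-variable fallback pattern. A minimal sketch (the resolve_model name and the gemma3 default are illustrative, not the examples' actual code):

```shell
# Illustrative sketch of env-var model selection, not the examples' actual code.
resolve_model() {
  # OLLAMA_MODEL wins when set; the gemma3 fallback is an assumed default.
  echo "${OLLAMA_MODEL:-gemma3}"
}

OLLAMA_MODEL=llama3 resolve_model   # prints "llama3"
```

In practice this means you can run, for example, OLLAMA_MODEL=llama3 make -C examples/ollama-chat run, provided that model tag is already pulled into your Ollama instance.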

How it maps to Ollama

  • Messages: genai roles user and model map to Ollama user and assistant. System instructions are sent as system messages.
  • Tools: the mapper converts GenerateContentConfig.Tools entries (specifically FunctionDeclarations).
  • Streaming: When ADK uses SSE streaming, the provider streams the response and yields partial outputs, buffering until the final usage metadata is returned.
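The role conversion in the first bullet can be sketched as follows. This is a simplified illustration with hypothetical names, not the actual internal/mappers code:

```go
package main

import "fmt"

// mapRole converts a genai chat role to its Ollama equivalent.
// Simplified sketch: the real mapper also emits system messages for
// system instructions and handles tool/function responses.
func mapRole(genaiRole string) string {
	switch genaiRole {
	case "user":
		return "user" // identical on both sides
	case "model":
		return "assistant" // Ollama's name for the model's turn
	default:
		return genaiRole // pass unknown roles through unchanged
	}
}

func main() {
	fmt.Println(mapRole("model")) // assistant
	fmt.Println(mapRole("user"))  // user
}
```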

Limitations

  • Unsupported features: Tool variants not supported by Ollama cause a request-time error. Advanced features that don't map cleanly may be ignored or return an explicit ADK error.
  • Multimodal Content: Ollama natively only supports base64-encoded images. While the ADK can handle arbitrary blobs natively (like PDFs, Documents, or Spreadsheets), the adk-go-ollama provider will only process genai.InlineData as images via a vision model. Arbitrary file attachments are ignored or rejected by the Ollama API.
  • Image Generation: Ollama's image generation feature currently supports only macOS and x/flux2-klein:4b (based on Flux architectures) and requires a large memory footprint (~12GB). Natively, it handles only single-image generation requests at a fixed step count via /v1/images/generations.
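The multimodal restriction above amounts to a MIME-type filter on inline data. A hypothetical sketch (the helper name and exact behavior are illustrative, not the provider's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// isForwardableBlob reports whether an inline blob's MIME type can be
// forwarded to Ollama, which only accepts base64-encoded images for
// vision models; other attachment types are dropped or rejected.
func isForwardableBlob(mimeType string) bool {
	return strings.HasPrefix(mimeType, "image/")
}

func main() {
	fmt.Println(isForwardableBlob("image/png"))       // true
	fmt.Println(isForwardableBlob("application/pdf")) // false
}
```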

License

Apache 2.0 — see LICENSE.
