Security Audit
Passed
Health: Passed
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 15 GitHub stars
Code: Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions: Passed
  • Permissions — No dangerous permissions requested
Purpose
This self-hosted observability server connects your AI coding assistant directly to production environments via MCP. It allows the AI agent to query logs, investigate errors, and analyze database performance without requiring a traditional user interface.

Security Assessment
The tool inherently handles sensitive data because its core function is to connect to your production databases and ingest application logs. It relies on HTTPS to secure the communication channel between the server and the AI client. The automated code scan found no hardcoded secrets or dangerous execution patterns, such as running arbitrary shell commands. Because the tool accesses live production infrastructure, the overall risk is rated as Medium. Proper configuration, especially enforcing read-only database permissions and strict network controls, is essential.

Quality Assessment
The project is actively maintained, with updates pushed as recently as today. It is licensed under the permissive MIT License. Community trust is still early, reflected in a modest 15 GitHub stars. The codebase is compact and cleanly passed a light security audit across its 12 files.

Verdict
Use with caution — the code itself appears safe, but its deep access to production data demands strict network and permission controls.
SUMMARY

Observability engine for AI coding agents. Self-hosted, MCP-native, no UI — your AI assistant sees production.

README.md

OpenTrace
Give your AI coding agent eyes into production


How It Works · Get Started · Features · MCP Tools · Security


Your AI coding assistant can read your code — but it's blind to production. When something breaks, you copy-paste stack traces, dig through log dashboards, and manually feed context into chat.

OpenTrace fixes that. It's a self-hosted server that connects your AI agent directly to your production observability data via MCP. No dashboards, no UI — the agent is the interface.

You:    "Why is the checkout endpoint throwing 500s?"

Agent:  Error rate on POST /api/checkout spiked from 0.1% to 4.2%
        starting 8 minutes ago. 23 errors, 12 affected users.

        Root cause: NilPointerError at payments_controller.rb:87
        — customer.address is nil when charging.

        This started after the latest deploy (commit a1b2c3d by
        [email protected]). The commit modified payments_controller.rb
        lines 82-95.

        Want me to fix it?

How It Works

Your Server                              Your Laptop
┌──────────────────────┐                ┌────────────────────────┐
│                      │                │                        │
│  OpenTrace Server    │◄──── MCP ──────│  Claude Code / Cursor  │
│                      │   over HTTPS   │                        │
│  Single Go binary    │                │  Reads .mcp.json       │
│  SQLite database     │                │  Auto-connects         │
│                      │                │                        │
└──────┬───────┬───────┘                └────────────────────────┘
       │       │
       │       │
       │       └──── Connects to your Postgres (read-only)
       │
       └──── Receives logs from your app via SDK
The server ingests logs from your app, connects to your databases, monitors health checks, tracks errors, and runs alert rules.

The agent queries all of this through MCP tools — searching logs, investigating errors, explaining slow queries, assessing deploy risk — without you copy-pasting anything.

The developer never opens a dashboard. They ask questions in natural language and the agent has the answers.


Get Started

1. Deploy the server

Pick one:

VPS (Hetzner, DigitalOcean, any Linux server)
ssh root@your-server
curl -fsSL https://get.opentrace.dev | bash

The installer:

  • Downloads the latest binary
  • Initializes the database
  • Sets up a systemd service
  • Optionally installs Caddy for automatic HTTPS
  • Prints the connect command when done
Docker
docker run -d --name opentrace \
  -p 8080:8080 \
  -v opentrace-data:/data \
  -e OPENTRACE_LISTEN_ADDR=0.0.0.0:8080 \
  ghcr.io/adham90/opentrace:latest
Docker Compose
docker compose -f docker-compose.prod.yml up -d
One-click platforms

  • Railway — Deploy on Railway
  • Render — Deploy to Render
  • DigitalOcean — Deploy to DO

2. Connect your project

In your project directory, run the connect command the installer printed:

curl -s https://your-server.com/connect | bash
  OpenTrace — connect your project
  Server: https://your-server.com

  Checking server... ok

  No accounts exist yet. Set up your admin account.

  Email: [email protected]
  Password: ********
  Confirm:  ********

  Authenticating... admin account created
  ✓ .mcp.json created
  ✓ .mcp.json added to .gitignore

  Done. Open Claude Code in this project — OpenTrace is connected.

No client install needed. Just curl and bash. The script creates .mcp.json in your project — Claude Code reads this file and connects to OpenTrace automatically.
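The generated file follows the standard MCP client configuration shape (an mcpServers map). The exact server name, URL path, and header names are produced by the connect script and may differ; the snippet below is a hypothetical illustration only:

```json
{
  "mcpServers": {
    "opentrace": {
      "type": "http",
      "url": "https://your-server.com/mcp",
      "headers": {
        "Authorization": "Bearer <personal-token>"
      }
    }
  }
}
```

Because the file embeds a personal token, the script also adds it to .gitignore so it never lands in version control.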

3. Set up the SDK

Open Claude Code and ask:

"Set up opentrace for my project"

The agent detects your framework, installs the SDK, configures it with the correct API key, and verifies logs are flowing.

  • opentrace — Ruby / Rails — gem 'opentrace'
  • @opentrace-sdk/node — Node.js — npm install @opentrace-sdk/node

The SDK sends structured logs, error traces, request performance data, and runtime metrics (memory, GC, threads) to OpenTrace automatically. Your app never blocks or crashes due to OpenTrace — all I/O is async with bounded queues.
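The "async with bounded queues" behavior can be sketched in Go, the project's own language. This is an illustrative model under stated assumptions, not OpenTrace's actual SDK code: enqueueing never blocks the request path, and events are shed when the buffer is full.

```go
package main

import "fmt"

// LogEvent is a minimal stand-in for the structured events the SDK ships.
type LogEvent struct {
	Level   string
	Message string
}

// Queue is a bounded, non-blocking buffer. A background sender (omitted
// here) would drain q.ch; the producer side never waits on it.
type Queue struct {
	ch      chan LogEvent
	dropped int // not goroutine-safe; a real SDK would use an atomic counter
}

func NewQueue(capacity int) *Queue {
	return &Queue{ch: make(chan LogEvent, capacity)}
}

// Enqueue never blocks: if the buffer is full, the event is dropped
// so the host application is never stalled by observability I/O.
func (q *Queue) Enqueue(e LogEvent) bool {
	select {
	case q.ch <- e:
		return true
	default:
		q.dropped++ // shed load rather than block or crash the app
		return false
	}
}

func main() {
	q := NewQueue(2)
	for i := 0; i < 3; i++ {
		q.Enqueue(LogEvent{Level: "info", Message: fmt.Sprintf("event %d", i)})
	}
	fmt.Println(len(q.ch), q.dropped) // 2 1
}
```

The non-blocking send via select/default is what guarantees the "never blocks" property; everything else (batching, retries) happens on the consumer side.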

4. Ask your agent anything

You're done. Start asking:

  • "What errors are happening in production?" — the agent searches error groups, shows impact and stack traces
  • "Why is the payments endpoint slow?" — the agent checks request performance, SQL stats, external API times
  • "Show me logs from the last hour with level ERROR" — the agent searches logs with filters
  • "Is it safe to deploy this change?" — the agent checks blast radius, code risk scores, recent errors
  • "Generate tests for the most common production errors" — the agent creates regression tests from real error data
  • "Set up a watcher for checkout error rate > 1%" — the agent creates a threshold alert
  • "What happened after the last deploy?" — the agent checks deploy impact, error rate changes
  • "Invite [email protected] to opentrace" — the agent creates a user account

What Can the Agent Do?

Search & Debug Logs

Full-text search across all services. Filter by level, service, trace ID, time range. Assemble distributed traces. Compare error rates between time periods.

Investigate Errors

Errors are automatically grouped by fingerprint. The agent sees occurrence counts, affected users, impact scores, and full stack traces. It can resolve or ignore error groups.
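Fingerprint grouping typically hashes the stable parts of an error while normalizing away dynamic values, so recurring occurrences collapse into one group. The Go sketch below illustrates that general technique; it is an assumption for illustration, not OpenTrace's actual algorithm.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"regexp"
)

// digits matches variable numeric fragments (IDs, counts) in messages.
var digits = regexp.MustCompile(`\d+`)

// fingerprint hashes the error type, the message with numbers
// normalized to "N", and the top stack frame. Two occurrences of the
// same bug with different record IDs therefore share a fingerprint.
func fingerprint(errType, message, topFrame string) string {
	norm := digits.ReplaceAllString(message, "N")
	h := sha1.Sum([]byte(errType + "|" + norm + "|" + topFrame))
	return hex.EncodeToString(h[:8])
}

func main() {
	a := fingerprint("NilPointerError", "user 42 has no address", "payments_controller.rb:87")
	b := fingerprint("NilPointerError", "user 97 has no address", "payments_controller.rb:87")
	fmt.Println(a == b) // true — same group despite different user IDs
}
```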

Query Your Database

Connect your Postgres databases (read-only). The agent runs EXPLAIN ANALYZE on slow queries, checks index health, detects lock contention, and identifies N+1 query patterns. All queries are validated SELECT-only via SQL AST parsing.
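The real safeguard parses the SQL into an AST and verifies every node is read-only. The Go sketch below is a deliberately simplified first-token gate that shows the intent only; it is not OpenTrace's validator, and a keyword check alone is bypassable (a Postgres CTE like WITH x AS (DELETE ...) defeats it), which is exactly why AST validation is used.

```go
package main

import (
	"fmt"
	"strings"
)

// isReadOnly accepts a single statement whose first keyword is SELECT
// or EXPLAIN, and rejects multi-statement batches. A production
// validator must walk a parsed AST instead of inspecting keywords.
func isReadOnly(query string) bool {
	q := strings.TrimSpace(query)
	q = strings.TrimSpace(strings.TrimSuffix(q, ";"))
	if q == "" || strings.Contains(q, ";") { // reject batches like "SELECT 1; DROP ..."
		return false
	}
	switch strings.ToUpper(strings.Fields(q)[0]) {
	case "SELECT", "EXPLAIN":
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(isReadOnly("SELECT * FROM orders LIMIT 10")) // true
	fmt.Println(isReadOnly("EXPLAIN ANALYZE SELECT 1"))      // true
	fmt.Println(isReadOnly("DELETE FROM orders"))            // false
	fmt.Println(isReadOnly("SELECT 1; DROP TABLE orders"))   // false
}
```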

Monitor Uptime

Create HTTP health checks that run on a schedule. The agent sees uptime percentages, response times, and gets notified when endpoints go down.

Set Up Alerts

Create threshold watches on error rate, response time, request volume, SQL count, or cache hit rate. The agent can create watches for code it just deployed — self-monitoring its own changes.

Assess Code Risk

Every file and endpoint gets a risk score based on error frequency, investigation history, and change velocity. Before modifying a file, the agent checks its production behavior — call volume, error rate, latency percentiles.
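A score built from those signals is commonly a weighted blend. The formula and weights below are illustrative assumptions (each input normalized to [0,1]), not the scoring OpenTrace actually uses:

```go
package main

import "fmt"

// riskScore blends error frequency, investigation history, and change
// velocity. The weights are hypothetical and sum to 1 so the result
// stays in [0,1] for normalized inputs.
func riskScore(errorRate, investigations, changeVelocity float64) float64 {
	const wErr, wInv, wChg = 0.5, 0.2, 0.3
	return wErr*errorRate + wInv*investigations + wChg*changeVelocity
}

func main() {
	// A frequently erroring, often-investigated, fast-changing file
	// scores high and would warrant extra caution before edits.
	fmt.Printf("%.2f\n", riskScore(0.8, 0.6, 0.9)) // 0.79
}
```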

Generate Tests from Real Errors

The agent creates regression tests using actual production error data — real inputs, real stack traces, real edge cases. Every test has a story: when the error happened, how many users it affected.

Track Deploys

The SDK sends the git commit hash with every log. OpenTrace detects deploys automatically when the commit hash changes. The agent correlates errors to specific commits.

Manage the Team

Invite users, revoke access, rotate API keys, view audit logs — all through conversation. No admin panel needed.


Adding Team Members

You:

"Invite [email protected] to opentrace"

The agent creates the account and gives you a temporary password. Send it to the developer securely.

The developer runs:

curl -s https://your-server.com/connect | bash

Enters their email and temporary password. They're connected. Each developer gets their own .mcp.json with a personal token.

To remove someone:

"Remove [email protected] from opentrace"

Their tokens are invalidated immediately across all projects.


MCP Tools

OpenTrace exposes 12 tools with 80+ actions via MCP. Each tool returns suggested_tools with pre-filled arguments so the agent knows what to call next.

  • logs (search, context, stats, summary, performance, trace, compare) — full-text log search, distributed trace assembly, N+1 detection
  • errors (list, detail, investigate, impact, ranking, resolve, ignore) — error grouping by fingerprint, user impact scoring, stack traces
  • database (queries, explain, tables, activity, locks, indexes, schema, runbook) — Postgres introspection, EXPLAIN plans, composite investigation runbooks
  • watches (status, create, delete, alerts, dismiss, investigate) — threshold alerts on error rate, latency, request volume
  • overview (status, triage, diagnose, timeline, investigate, changes, settings, notes, session_summary) — system health, alerts, incident timeline, settings, agent memory
  • analytics (traffic, endpoints, heatmap, trends, movers) — traffic patterns, endpoint performance, time-series analysis
  • code (risk, fragile, test_gaps, annotate_file, gen_context, deps_risk) — code risk scores, test generation, blast radius, production annotations
  • deploys (history, impact, record) — deploy tracking, error rate impact measurement
  • healthchecks (list, uptime, create, delete) — HTTP endpoint monitoring with uptime tracking
  • servers (list, query, health) — server and process metrics (CPU, memory, GC)
  • admin (update_retention, users, audit) — user management, retention, audit log (admin only)
  • setup (status, detect, guide, verify) — SDK setup assistant: detects framework, provides config with API key

Security

  • No self-registration — the first curl .../connect creates the admin account; everyone else needs an invite.
  • Per-user tokens — each developer gets a personal MCP token, stored in their local .mcp.json and revocable independently.
  • HTTPS via Caddy — the install script sets up Caddy with automatic Let's Encrypt certificates; OpenTrace listens on localhost only.
  • Rate limiting — auth endpoints are rate-limited: 10 attempts per minute per IP, then blocked.
  • Read-only DB access — all queries against your Postgres are validated SELECT-only via SQL AST parsing, with configurable timeouts and row limits.
  • API key auth — SDK log ingestion requires a Bearer token.
  • No telemetry — fully self-hosted. No external calls, no tracking; your data stays on your server.

Configuration

Server-side environment variables (.env file):

  • OPENTRACE_LISTEN_ADDR (default: 127.0.0.1:8080) — HTTP listen address
  • OPENTRACE_DATA_DIR (default: ~/.opentrace) — SQLite database directory
  • OPENTRACE_API_KEY (auto-generated) — Bearer token for SDK log ingestion
  • OPENTRACE_MAX_QUERY_ROWS (default: 500) — max rows returned from SQL queries
  • OPENTRACE_STATEMENT_TIMEOUT_MS (default: 5000) — SQL query timeout in milliseconds
  • OPENTRACE_TRUSTED_PROXIES (default: empty) — comma-separated proxy IPs for rate limiting
  • OPENTRACE_CORS_ORIGINS (default: empty) — allowed origins for browser requests

See .env.example for all options.


Server Commands

Run on the server only:

opentrace init      Initialize the database (first-time setup)
opentrace serve     Start the server
opentrace mcp       Start MCP stdio server (for local development)
opentrace seed      Populate sample data (development only)
opentrace backup    Create a SQLite database backup
opentrace restore   Restore from a backup file

No client-side install. Connect with curl, manage everything through your AI assistant.


How It's Built

  • Go — single binary, no runtime dependencies, cross-compiled for Linux and macOS
  • SQLite — zero-dependency database with WAL mode and FTS5 for full-text log search
  • MCP — native Model Context Protocol with Streamable HTTP and SSE transports
  • Pure Go — no CGO, no system dependencies, go build and ship

Development

git clone https://github.com/adham90/opentrace.git && cd opentrace
cp .env.example .env
go build -o opentrace ./cmd/opentrace
./opentrace serve
go test -short -race ./...    # unit tests
go vet ./...                  # linting

License

MIT — use it however you want.
