aimock

mcp
Security Audit
Passed
Health Passed
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push today
  • Community trust — 121 GitHub stars
Code Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Passed
  • Permissions — No dangerous permissions requested
Purpose
This tool provides a unified mock server for testing AI-powered applications, allowing developers to simulate and control responses from LLM APIs, vector databases, and agent protocols like MCP and A2A.

Security Assessment
The overall risk is Low. As a local testing utility, it intentionally spawns a local server to intercept network requests, so network activity is expected and contained within the developer's environment. The automated code scan evaluated 12 files and found no dangerous patterns, hardcoded secrets, or requests for excessive permissions. Because it replaces real infrastructure during testing, it does not access live sensitive data or execute unexpected shell commands.

Quality Assessment
The project demonstrates strong health and maintenance indicators. It is licensed under the standard MIT license, the repository is highly active with its most recent push occurring today, and it has accrued 121 GitHub stars, a solid baseline of community trust for a niche development tool. The README is thorough and highlights robust engineering practices, including dedicated unit tests, drift detection, and CI integration badges.

Verdict
Safe to use.
SUMMARY

Mock everything your AI app talks to — LLM APIs, MCP, A2A, vector DBs, search. One package, one port, zero dependencies.

README.md

aimock (badges: Unit Tests, Drift Tests, npm version)

Mock infrastructure for AI application testing — LLM APIs, MCP tools, A2A agents, vector databases, search, rerank, and moderation. One package, one port, zero dependencies.

Quick Start

npm install @copilotkit/aimock

import { LLMock } from "@copilotkit/aimock";

const mock = new LLMock({ port: 0 });
mock.onMessage("hello", { content: "Hi there!" });
await mock.start();

process.env.OPENAI_BASE_URL = `${mock.url}/v1`;

// ... run your tests ...

await mock.stop();

The aimock Suite

aimock mocks everything your AI app talks to:

Tool       | What it mocks                                                     | Docs
LLMock     | OpenAI, Claude, Gemini, Bedrock, Azure, Vertex AI, Ollama, Cohere | Providers
MCPMock    | MCP tools, resources, prompts with session management             | MCP
A2AMock    | Agent-to-agent protocol with SSE streaming                        | A2A
VectorMock | Pinecone, Qdrant, ChromaDB compatible endpoints                   | Vector
Services   | Tavily search, Cohere rerank, OpenAI moderation                   | Services

Run them all on one port with npx aimock --config aimock.json, or use the programmatic API to compose exactly what you need.
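The config file's schema is not shown in this README, so the following is only a sketch of what an aimock.json composing several mocks on one port might look like. The field names (`mocks`, `providers`, `compat`) are assumptions, not the documented format; the port and fixtures values mirror the CLI examples elsewhere on this page.

```json
{
  "port": 4010,
  "fixtures": "./fixtures",
  "mocks": {
    "llm": { "providers": ["openai", "anthropic"] },
    "mcp": { "tools": ["search"] },
    "vector": { "compat": "pinecone" }
  }
}
```

Consult the linked docs pages for the real schema before relying on any of these keys.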

Features

  • Record & Replay — Proxy real APIs, save as fixtures, replay deterministically forever
  • 11 LLM Providers — OpenAI, Claude, Gemini, Bedrock, Azure, Vertex AI, Ollama, Cohere — full streaming support
  • MCP / A2A / Vector — Mock every protocol your AI agents use
  • Chaos Testing — 500 errors, malformed JSON, mid-stream disconnects at any probability
  • Drift Detection — Daily CI validation against real APIs
  • Streaming Physics — Configurable ttft, tps, and jitter
  • WebSocket APIs — OpenAI Realtime, Responses WS, Gemini Live
  • Prometheus Metrics — Request counts, latencies, fixture match rates
  • Docker + Helm — Container image and Helm chart for CI/CD
  • Zero dependencies — Everything from Node.js builtins
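As an illustration of the chaos and streaming-physics knobs above, a hedged sketch: `ttft`, `tps`, and `jitter` are named in the feature list, but the surrounding structure and the chaos field names below are assumptions rather than the documented schema, and all values are illustrative.

```json
{
  "chaos": {
    "errorRate": 0.05,
    "malformedJson": 0.01,
    "midStreamDisconnect": 0.02
  },
  "streaming": {
    "ttft": 150,
    "tps": 40,
    "jitter": 0.25
  }
}
```

The idea is that each chaos entry is a probability and the streaming entries shape time-to-first-token and tokens-per-second; check the project docs for the actual option names.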

CLI

# LLM mocking only
npx aimock -p 4010 -f ./fixtures

# Full suite from config
npx aimock --config aimock.json

# Record mode: proxy to real APIs, save fixtures
npx aimock --record --provider-openai https://api.openai.com

# Docker
docker run -d -p 4010:4010 -v ./fixtures:/fixtures ghcr.io/copilotkit/aimock -f /fixtures
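For CI pipelines, the `docker run` line above translates naturally into a Compose service. This is a sketch mirroring that exact command; the service name is chosen here for illustration, not taken from the project.

```yaml
# Sketch only: mirrors the `docker run` example above.
# The service name "aimock" is our choice, not mandated by the project.
services:
  aimock:
    image: ghcr.io/copilotkit/aimock
    command: ["-f", "/fixtures"]
    ports:
      - "4010:4010"
    volumes:
      - ./fixtures:/fixtures
```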

Switching from other tools?

Step-by-step migration guides: MSW · VidaiMock · mock-llm · Python mocks · Mokksy

Documentation

https://aimock.copilotkit.dev

Real-World Usage

AG-UI uses aimock for its end-to-end test suite, verifying AI agent behavior across LLM providers with fixture-driven responses.

License

MIT

Comments (0)

No results found