inkog (mcp)

Security Audit: Fail

Health: Pass
  • License — Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 28 GitHub stars
Code: Fail
  • fs module — File system access in action.yml
  • Hardcoded secret — Potential hardcoded credential in scripts/e2e_verify.sh
Permissions: Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool is a static analysis security co-pilot designed to scan AI agent code. It helps developers identify logic bugs, prompt injection vulnerabilities, and compliance gaps before deployment.

Security Assessment
The tool reads the file system in order to scan your code, which is expected behavior for a static analysis tool. The rule-based scan flagged a potential hardcoded credential, but it sits inside an end-to-end test script (`scripts/e2e_verify.sh`); credentials in test fixtures are common and generally low risk, not a core application vulnerability. No dangerous broad permissions were requested. Note that the optional cloud-based scanning mode transmits your code over the network; running locally avoids this.
Overall Risk Rating: Low to Medium (safe for local execution, but treat cloud scanning with standard caution).

Quality Assessment
The project is actively maintained, with the most recent push landing today. It carries the permissive Apache-2.0 license and shows modest but growing community trust (28 GitHub stars). Comprehensive documentation, multi-language support, and recent releases indicate a professional maintenance standard.

Verdict
Use with caution: safe to run locally for scanning code, but be mindful of file system access and review how test scripts handle credentials.

The pre-flight check for AI agents


Inkog

The security co-pilot for AI agent development.

Build secure AI agents from the start. Scan for logic bugs, prompt injection, missing guardrails, and compliance gaps — before they reach production.

English · 简体中文 · 日本語 · 한국어 · Español · Português · Deutsch · Français


[Screenshot: Inkog scanning AI agent code for vulnerabilities]


AI agents can loop forever, drain your API budget in minutes, execute arbitrary code from user input, or make high-stakes decisions with zero human oversight. Most of these flaws pass code review because they look like normal code — the danger is in the runtime behavior.

Inkog scans your agent code statically and catches these problems before deployment. One command, works across 21 frameworks, maps findings to EU AI Act and OWASP LLM Top 10.

Want to try it without installing anything? Drop a repo URL into the free scanner at inkog.io — no signup required.

What's New in v1.2.0

  • Deep scan (inkog -deep) — orchestrator-driven analysis that catches subtle logic flaws that pattern matching misses
  • Skill & MCP scanning — audit SKILL.md packages and MCP servers for tool poisoning, command injection, excessive permissions
  • MLBOM generation — Machine Learning Bill of Materials for your agent
  • AG2 + Azure AI Foundry support
  • AI provider secret detection — Anthropic, Gemini, Groq, HuggingFace
  • CI/CD templates — GitLab CI, Azure DevOps, Jenkins (alongside the existing GitHub Action)

See the full CHANGELOG.
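The AI provider secret detection above is, at its core, prefix-based pattern matching. A minimal sketch of the idea in Python — the prefixes below are publicly known key formats, not Inkog's actual ruleset, and a real detector would also entropy-check matches to cut false positives:

```python
import re

# Illustrative provider-key prefixes (an assumption, not Inkog's rules).
PATTERNS = {
    "anthropic":   re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "gemini":      re.compile(r"AIza[A-Za-z0-9_-]{30,}"),
    "groq":        re.compile(r"gsk_[A-Za-z0-9]{20,}"),
    "huggingface": re.compile(r"hf_[A-Za-z0-9]{20,}"),
}

def find_provider_secrets(source: str) -> list[tuple[str, str]]:
    """Return (provider, matched_token) pairs found in source text."""
    hits = []
    for provider, pattern in PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((provider, match.group(0)))
    return hits
```

Scanning of this kind runs entirely on local file contents, which is why the README can promise that secrets are detected locally and never uploaded.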

When to Use Inkog

  • Building an AI agent — Scan during development to catch infinite loops, prompt injection, and missing guardrails before they ship
  • Adding security to CI/CD — Add inkog-io/inkog@v1 to GitHub Actions for automated security gates on every PR
  • Preparing for EU AI Act — Generate compliance reports mapping your agent to Article 14, NIST AI RMF, OWASP LLM Top 10
  • Reviewing agent code — Use from Claude Code, Cursor, or any MCP client to get security analysis while you code
  • Auditing MCP servers — Check any MCP server for tool poisoning, privilege escalation, or data exfiltration before installing
  • Verifying AGENTS.md — Validate that governance declarations match actual code behavior
  • Scanning Skill packages — Audit SKILL.md packages for tool poisoning, command injection, and excessive permissions before adding to your agent
  • Generating an MLBOM — Create a Machine Learning Bill of Materials documenting your agent's components, tools, and data flows
  • Building multi-agent systems — Detect delegation loops, privilege escalation, and unauthorized handoffs between agents (A2A audit)

Quick Start

Try it on a repo from your browser — inkog.io/scan. No signup, no install.

For local scans, no install needed:

npx -y @inkog-io/cli scan .

Or install permanently:

| Method | Command |
| --- | --- |
| Install script | `curl -fsSL https://inkog.io/install.sh \| sh` |
| Homebrew | `brew tap inkog-io/inkog && brew install inkog` |
| Go | `go install github.com/inkog-io/inkog/cmd/inkog@latest` |
| Binary | Download from Releases |
# Get your free API key at https://app.inkog.io
export INKOG_API_KEY=sk_live_...

inkog .

What It Catches

| Category | Examples | Why it matters |
| --- | --- | --- |
| Infinite loops | Agent re-calls itself with no exit condition; LLM output fed back as input without a cap | Your agent runs forever and racks up API costs |
| Prompt injection | User input flows into the system prompt unsanitized; tainted data reaches tool calls | Attackers can hijack your agent's behavior |
| Missing guardrails | No human-in-the-loop for destructive actions; no rate limits on LLM calls; unconstrained tool access | One bad decision and your agent goes rogue |
| Hardcoded secrets | API keys, tokens, and passwords in source code (detected locally, never uploaded) | Credentials leak when you push to GitHub |
| Compliance gaps | Missing human oversight (EU AI Act Article 14); no audit logging; missing authorization checks | You're legally required to have these controls by August 2026 |
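The loop and guardrail findings describe code shapes rather than specific APIs. A minimal sketch of the safe shape, assuming hypothetical `call_llm`, `execute_tool`, and `approve` callables rather than any real framework:

```python
DESTRUCTIVE = {"delete_file", "send_email", "transfer_funds"}

def run_agent(task, call_llm, execute_tool,
              approve=lambda action, arg: False, max_steps=10):
    """call_llm/execute_tool/approve are stand-ins for your own stack;
    the bounded loop and the approval gate are the point."""
    observation = task
    for _ in range(max_steps):                 # hard cap, never `while True`
        action, arg = call_llm(observation)
        if action == "finish":
            return arg
        if action in DESTRUCTIVE and not approve(action, arg):
            observation = f"{action} denied by operator"   # guardrail held
            continue
        observation = execute_tool(action, arg)
    raise RuntimeError(f"no answer after {max_steps} steps; aborting")
```

The unsafe variants the scanner flags are the same loop with `while True` instead of the step cap, or with the `DESTRUCTIVE` check removed.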

Full detection catalog →

Supported Frameworks

Code-first: LangChain · LangGraph · CrewAI · AutoGen · AG2 · OpenAI Agents · Semantic Kernel · Azure AI Foundry · LlamaIndex · Haystack · DSPy · Phidata · Smolagents · PydanticAI · Google ADK

No-code: n8n · Flowise · Langflow · Dify · Microsoft Copilot Studio · Salesforce Agentforce

GitHub Actions

- uses: inkog-io/inkog@v1
  with:
    api-key: ${{ secrets.INKOG_API_KEY }}
    sarif-upload: true   # Shows findings in GitHub Security tab

Full CI/CD docs → | Complete workflow example →

Deep scan

Run an advanced orchestrator-based analysis with enriched findings, an agent profile, compliance coverage, and a premium HTML report:

inkog -deep .
inkog -deep -output html . > deep-report.html

Requires the Inkog Deep role. Deep scan docs →

Skill & MCP scan

Scan SKILL.md packages, agent tools, and MCP servers for vulnerabilities:

# Scan a skill package
inkog skill-scan .
inkog skill-scan --repo https://github.com/org/repo

# Scan an MCP server by registry name
inkog mcp-scan github
inkog mcp-scan github --repo https://github.com/org/mcp-server

# Deep scan either
inkog skill-scan --deep .
inkog mcp-scan --deep --repo https://github.com/org/mcp-server

Skill & MCP scan docs →

Scan policies
inkog . --policy low-noise       # Only proven vulnerabilities
inkog . --policy balanced        # Vulnerabilities + risk patterns (default)
inkog . --policy comprehensive   # Everything including hardening tips
inkog . --policy governance      # Article 14 controls, authorization, audit trails
inkog . --policy eu-ai-act       # EU AI Act compliance report

Policy reference →

MCP Server

Scan agent code directly from Claude, ChatGPT, or Cursor:

npx -y @inkog-io/mcp

7 tools including MCP server auditing, Skill package scanning, and multi-agent topology analysis. MCP docs →
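For stdio MCP clients, wiring this up is typically a single config entry. A sketch assuming Claude Desktop's `mcpServers` config shape (other clients use their own format; the API key placeholder matches the Quick Start above):

```json
{
  "mcpServers": {
    "inkog": {
      "command": "npx",
      "args": ["-y", "@inkog-io/mcp"],
      "env": { "INKOG_API_KEY": "sk_live_..." }
    }
  }
}
```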

Inkog Red — Coming Soon

Automated adversarial testing for AI agents. Inkog Red probes your running agents with prompt injection, jailbreaks, and tool misuse attacks to validate that defenses hold under real-world conditions.

Join the waitlist →

Community

Star History

[Star history chart]

License

Apache 2.0 — See LICENSE
