visor

Security Audit
Failed
Health Passed
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 19 GitHub stars
Code Failed
  • Hardcoded secret — Potential hardcoded credential in .github/workflows/ai-battle-test.yml
  • rm -rf — Recursive force deletion command in .github/workflows/build-dist.yml
  • rm -rf — Recursive force deletion command in .github/workflows/ci.yml
Permissions Passed
  • Permissions — No dangerous permissions requested
Purpose
This tool is an AI workflow engine that orchestrates code reviews, AI providers, and automation tasks using YAML-driven pipelines. It can run across multiple platforms, including CLI, GitHub Actions, Slack bots, and HTTP APIs.

Security Assessment
Overall Risk: Medium. The engine is designed to execute shell commands, make external network requests, and connect to AI providers, which naturally requires access to sensitive data like API keys. The automated scan flagged a potential hardcoded credential in a GitHub Actions workflow file (`ai-battle-test.yml`). Additionally, `rm -rf` recursive force deletion commands were found in the CI/CD build scripts. While the hardcoded secret and deletion commands currently appear confined to testing and deployment scripts rather than the core application, any tool with arbitrary command execution capabilities warrants careful handling. No inherently dangerous account permissions are requested.

Quality Assessment
The project is actively maintained, with its most recent code push occurring today. It uses a standard, permissive MIT license. However, it currently has a low community trust level, reflected by only 19 GitHub stars, meaning it has not yet been widely battle-tested by a large user base.

Verdict
Use with caution — review configuration files carefully to avoid exposing secrets, and strictly limit the tool's runtime permissions since it has broad capabilities to execute commands and make external network requests.
SUMMARY

Visor — AI workflow engine for code review, assistants, and automation. Orchestrates checks, MCP tools, and AI providers with YAML-driven pipelines. Runs as GitHub Action, CLI, Slack bot, or HTTP API.

README.md
Visor Logo

Visor — AI workflow engine for code review, assistants & automation


Orchestrate checks, MCP tools, and AI providers with YAML-driven pipelines.
Runs as GitHub Action, CLI, Slack bot, Telegram bot, or HTTP API.


Visor is an open-source workflow engine that lets you define multi-step AI pipelines in YAML. Wire up shell commands, AI providers, MCP tools, HTTP calls, and custom scripts into dependency-aware DAGs — then run them from your terminal, CI, Slack, Telegram, Email, WhatsApp, Teams, or an HTTP endpoint.

What you get out of the box:

  • YAML-driven pipelines — define checks, transforms, routing, and AI prompts in a single config file.
  • 8 runtime modes — CLI, GitHub Action, Slack bot, Telegram bot, Email, WhatsApp, Teams, HTTP server — same config, any surface.
  • 17 provider types — ai, command, script, mcp, utcp, http, claude-code, a2a, github, memory, workflow, and more.
  • AI orchestration — multi-provider (Gemini, Claude, OpenAI, Bedrock), session reuse, MCP/UTCP tool calling, retry & fallback.
  • Execution engine — dependency DAGs, parallel waves, forEach fan-out, conditional routing, failure auto-remediation.
  • Built-in testing — YAML-native integration tests with fixtures, mocks, and assertions.

What do you want to build?

  • Code review on PRs — Guide: Code Review Pipeline (example: quick-start-tags.yaml)
  • AI agent with tools — Guide: AI Agent (example: ai-custom-tools-simple.yaml)
  • Multi-step automation — Workflow Creation Guide (example: enhanced-config.yaml)
  • Chat assistant / bot — Bot Integrations (example: teams-assistant.yaml)
  • Run shell commands + AI — Command Provider (example: ai-with-bash.yaml)
  • Connect MCP tools — MCP Provider (example: mcp-provider-example.yaml)
  • Call tools via UTCP — UTCP Provider (example: utcp-provider-example.yaml)
  • Add API integrations (TDD) — Guide: TDD Assistant Workflows (example: workable.tests.yaml)

First time? Run npx visor init to scaffold a working config, then npx visor to run it.

Table of Contents

Requirements: Node.js 18+ (CI runs Node 20).

🚀 Quick Start

Install & Run

# Install
npm i -D @probelabs/visor

# Scaffold a starter config (pick a template)
npx visor init                  # interactive picker
npx visor init code-review      # PR review pipeline
npx visor init agent            # AI agent with tools
npx visor init automation       # multi-step pipeline
npx visor init assistant        # chat assistant / Slack bot

# Run
npx visor                       # run all steps
npx visor --tags fast           # run steps tagged "fast"
npx visor validate              # check config for errors

Or one-off without installing: npx -y @probelabs/visor@latest --check all --output table

Minimal Config (.visor.yaml)

version: "1.0"
steps:
  security:
    type: ai
    prompt: "Identify security issues in changed files"
    tags: ["fast", "security"]

  run-tests:
    type: command
    exec: npm test
    depends_on: [security]

  notify:
    type: http
    method: POST
    url: https://hooks.slack.com/...
    body: '{ "text": "Tests {{ outputs[''run-tests''].status }}" }'
    depends_on: [run-tests]

As a GitHub Action

# .github/workflows/visor.yml
name: Visor
on:
  pull_request: { types: [opened, synchronize] }
  issues: { types: [opened] }
  issue_comment: { types: [created] }
permissions:
  contents: read
  pull-requests: write
  issues: write
  checks: write
jobs:
  visor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: probelabs/visor@v1
        env:
          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}

Tip: Pin releases for stability with @v1. For bleeding-edge, use @nightly.

🤖 AI Assistant Framework

Visor ships with a built-in assistant framework — three composable workflows for building AI-powered assistants with skills, tools, and multi-repo code exploration. Import them with a single line:

version: "1.0"

imports:
  - visor://assistant.yaml

checks:
  chat:
    type: workflow
    workflow: assistant
    assume: ["true"]
    args:
      question: "{{ conversation.current.text }}"
      system_prompt: "You are a helpful engineering assistant."
      intents:
        - id: chat
          description: general Q&A or small talk
        - id: code_help
          description: questions about code or architecture
          default_skills: [code-explorer]
      skills:
        - id: code-explorer
          description: needs codebase exploration or code search
          tools:
            code-talk:
              workflow: code-talk
              inputs:
                projects:
                  - id: backend
                    repo: my-org/backend
                    description: Backend API service
          allowed_commands: ['git:log:*', 'git:diff:*']
    on_success:
      goto: chat

  • assistant — Full AI assistant: intent routing, dynamic skill activation, tool orchestration, knowledge injection, bash command control
  • code-talk — Multi-repo code exploration: routes questions to repos, checks out code, explores with tools, returns answers with file references and confidence scoring
  • intent-router — Lightweight intent classification: picks intent, rewrites question, selects skills/tags

The visor:// protocol resolves to bundled workflows shipped with the package — no network fetch needed.

Learn more: docs/assistant-workflows.md | Examples: code-talk-workflow · code-talk-as-tool · intent-router

🖥️ Runtime Modes

Visor runs the same YAML config across multiple surfaces:

  • CLI — visor --check all --output table — local dev, CI pipelines
  • GitHub Action — uses: probelabs/visor@v1 — PR reviews, issue triage, annotations
  • Slack bot — visor --slack --config .visor.yaml — team assistants, ChatOps
  • Telegram bot — visor --telegram --config .visor.yaml — personal assistants, group bots
  • Email bot — visor --email --config .visor.yaml — email assistants, threaded conversations
  • WhatsApp bot — visor --whatsapp --config .visor.yaml — WhatsApp assistants, customer support
  • Teams bot — visor --teams --config .visor.yaml — enterprise assistants, team ChatOps
  • HTTP server — http_server: { enabled: true, port: 8080 } — webhooks, API integrations

See Bot Integrations for a comparison of all bot transports.

Additional modes:

  • TUI — interactive chat-style terminal UI: visor --tui
  • SDK — programmatic Node.js API: import { runChecks } from '@probelabs/visor/sdk'
  • Scheduler — cron-based execution with database-backed persistence
# CLI examples
visor --check all --output table
visor --tags fast,local --max-parallelism 5
visor --analyze-branch-diff                   # PR-style diff analysis
visor --event pr_updated                      # Simulate GitHub events
visor --tui --config ./workflow.yaml          # Interactive TUI
visor --debug-server --debug-port 3456        # Live web debugger
visor config snapshots                        # Config version history
visor validate                                # Validate config
visor test --progress compact                 # Run integration tests

Run modes: Default is CLI mode everywhere. For GitHub-specific behavior (comments, checks, annotations), run with --mode github-actions or set mode: github-actions in the Action. Force CLI mode inside Actions with VISOR_MODE=cli.
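As a sketch, enabling GitHub mode in the Action could look like the following. Note that passing the mode via a `with:` input, and the input name `mode`, are assumptions based on the wording above; check docs/commands.md for the exact syntax.

```yaml
# Hypothetical sketch: the `with: mode:` input name is an assumption
jobs:
  visor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: probelabs/visor@v1
        with:
          mode: github-actions   # GitHub-specific behavior: comments, checks, annotations
```

To force plain CLI behavior inside Actions instead, set VISOR_MODE=cli in the step's env.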

See docs/commands.md for the full CLI reference.

💬 PR Comment Commands

Trigger reviews and assistant actions via comments on PRs or issues:

/review                        # Re-run all checks
/review --check security       # Re-run specific check
/visor how does caching work?  # Ask the built-in assistant

Learn more: docs/commands.md

🧩 Core Concepts

  • Step (or Check) — unit of work: a shell command, AI call, HTTP request, script, etc.
  • Provider — how a step runs: ai, command, script, mcp, utcp, http, claude-code, github, memory, workflow, …
  • depends_on — execution order: independents run in parallel, dependents wait.
  • forEach — fan-out: transform output into an array, run dependents per item.
  • Routing — on_fail, on_success, goto, retry: conditional flow with loop safety.
  • Transform — reshape output with Liquid templates or JavaScript before passing downstream.
  • Schema — JSON Schema that validates step output (e.g., code-review).
  • Template — renders validated output into Markdown/table for PR comments.
  • Group — which PR comment a step posts into.
  • Tags — label steps and filter with --tags fast,local.
  • Events — trigger steps on PRs, issues, comments, webhooks, or cron schedules.
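
A cron-triggered step might be sketched as below; note that the `on:` and `schedule:` key names are hypothetical, not confirmed by this README, so check the event-triggers docs for the real syntax:

```yaml
steps:
  nightly-audit:
    type: command
    exec: npm audit
    on: [schedule]          # hypothetical trigger key
    schedule: "0 3 * * *"   # hypothetical cron field: run daily at 03:00
```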

🔌 Provider Types

  • ai — multi-provider AI (Gemini, Claude, OpenAI, Bedrock) — code review, analysis, generation
  • command — shell commands with Liquid templating — run tests, build, lint
  • script — JavaScript in a secure sandbox — transform data, custom logic
  • mcp — MCP tool execution (stdio/SSE/HTTP) — external tool integration
  • utcp — UTCP tool execution (HTTP/CLI/SSE) — direct tool calling via manuals
  • claude-code — Claude Code SDK with MCP tools — deep code analysis, refactoring
  • http — HTTP output/webhook sender — notify Slack, trigger CI
  • http_input — webhook receiver — accept external events
  • http_client — HTTP API client — call external APIs
  • github — GitHub operations (labels, comments, checks) — label PRs, post reviews
  • memory — key-value store (get/set/append/increment) — state across steps
  • workflow — reusable sub-workflows from files/URLs — compose pipelines
  • human-input — interactive prompts (TUI/Slack) — approvals, user input
  • log / logger — structured logging — debug, audit trail
  • noop — no-op placeholder — orchestration nodes
  • git-checkout — Git operations (clone, checkout, worktree) — multi-repo workflows
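
For instance, the memory provider can carry state between steps. A minimal sketch, assuming `op` and `key` are the field names (they are not spelled out in this README):

```yaml
steps:
  bump-counter:
    type: memory
    op: increment           # one of: get, set, append, increment
    key: run_count          # `op`/`key` are hypothetical field names

  report:
    type: command
    exec: 'echo "This is run #{{ outputs[''bump-counter''] }}"'
    depends_on: [bump-counter]
```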

See docs/pluggable.md for building custom providers.

⚙️ Orchestration

Dependencies & Parallel Execution

Steps without dependencies run in parallel waves. depends_on enforces ordering:

steps:
  fetch-data:
    type: command
    exec: curl -s https://api.example.com/data

  analyze:
    type: ai
    prompt: "Analyze: {{ outputs['fetch-data'] }}"
    depends_on: [fetch-data]

  report:
    type: command
    exec: 'echo "Done: {{ outputs[''analyze''] | truncate: 100 }}"'
    depends_on: [analyze]

forEach Fan-Out

Transform output into an array, run dependents once per item:

steps:
  list-services:
    type: command
    exec: 'echo ''["auth","payments","notifications"]'''
    forEach: true

  check-service:
    type: command
    exec: 'curl -s https://{{ outputs["list-services"] }}/health'
    depends_on: [list-services]

Use outputs_raw in downstream steps to access the aggregated array of all forEach results:

  summarize:
    type: script
    depends_on: [list-services]
    content: |
      const arr = outputs_raw['list-services'] || [];
      return { total: arr.length };

Learn more: docs/foreach-dependency-propagation.md

Failure Routing & Auto-Remediation

Steps can retry, run remediation, or jump to other steps on failure:

version: "2.0"
routing:
  max_loops: 5
steps:
  build:
    type: command
    exec: make build
    on_fail:
      retry: { max: 2, backoff: { mode: exponential, delay_ms: 500 } }
      goto: setup            # Jump back on exhausted retries

  deploy:
    type: command
    exec: make deploy
    depends_on: [build]
    on_success:
      run: [notify]          # Run extra steps on success
    on_fail:
      goto_js: |
        return attempt <= 2 ? 'build' : null;  // Dynamic routing

Learn more: docs/failure-routing.md

Conditional Execution & Author Permissions

steps:
  security-scan:
    type: command
    exec: npm audit
    if: "!hasMinPermission('MEMBER')"    # Only for external contributors

  auto-approve:
    type: github
    op: labels.add
    values: ["approved"]
    if: "hasMinPermission('COLLABORATOR') && totalIssues === 0"

  protect-secrets:
    type: command
    exec: echo "Checking permissions..."
    fail_if: "!isMember() && files.some(f => f.filename.startsWith('secrets/'))"

Available permission functions: hasMinPermission(level), isOwner(), isMember(), isCollaborator(), isContributor(), isFirstTimer().

Learn more: docs/author-permissions.md

🤖 AI & MCP

Multi-Provider AI

steps:
  review:
    type: ai
    prompt: "Review this code for security issues"
    ai:
      provider: anthropic          # or: google, openai, bedrock
      model: claude-sonnet-4-20250514
      fallback:
        strategy: any              # Try other providers on failure

Supported providers: Google Gemini, Anthropic Claude, OpenAI GPT, AWS Bedrock.

Set one key via environment: GOOGLE_API_KEY, ANTHROPIC_API_KEY, OPENAI_API_KEY, or AWS credentials.

MCP Tool Integration

Give AI steps access to MCP tools, or call MCP tools directly:

# AI step with MCP tools
steps:
  analyze:
    type: ai
    prompt: "Use the search tool to find security patterns"
    ai:
      mcp_servers:
        - name: code-search
          command: npx
          args: ["-y", "@probe/search"]

# Direct MCP tool execution
  search:
    type: mcp
    transport: stdio
    command: npx
    args: ["-y", "@probe/search"]
    method: search
    arguments:
      query: "{{ outputs['setup'].pattern }}"

AI Session Reuse

Chain AI conversations across steps:

steps:
  security:
    type: ai
    prompt: "Find security issues"

  remediation:
    type: ai
    prompt: "Suggest fixes for the issues you found"
    depends_on: [security]
    reuse_ai_session: true          # Carries conversation history
    session_mode: append            # Or: clone (default)

Claude Code Provider

Full Claude Code SDK integration with MCP tools and subagents:

steps:
  deep-review:
    type: claude-code
    prompt: "Analyze code complexity and suggest refactoring"
    max_turns: 10
    mcp_servers:
      - name: filesystem
        command: npx
        args: ["-y", "@modelcontextprotocol/server-filesystem", "."]

Learn more: docs/claude-code.md · docs/mcp-provider.md · docs/advanced-ai.md

🧰 GitHub Provider

Native GitHub operations (labels, comments, checks) without shelling out to gh:

steps:
  apply-labels:
    type: github
    op: labels.add
    values:
      - "{{ outputs.overview.tags.label | default: '' | safe_label }}"
    value_js: |
      return values.filter(v => typeof v === 'string' && v.trim().length > 0);

Learn more: docs/github-ops.md

🧬 Templating & Transforms

Liquid Templates

Steps can use Liquid templates in prompts, exec commands, HTTP bodies, and more:

steps:
  greet:
    type: command
    exec: 'echo "Files changed: {{ files | size }}, branch: {{ branch }}"'

  post-results:
    type: http
    url: https://api.example.com/results
    body: |
      { "issues": {{ outputs["review"] | json }},
        "pr": {{ pr.number }} }

Available context: outputs, outputs_raw, inputs, pr, files, env, memory, branch, event, conversation.
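
A minimal sketch combining several of these context variables in one command step:

```yaml
steps:
  context-demo:
    type: command
    exec: >-
      echo "PR #{{ pr.number }} on {{ branch }}:
      {{ files | size }} files changed ({{ event }})"
```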

JavaScript Transforms

Transform step output before passing to dependents:

steps:
  fetch:
    type: command
    exec: 'node -e "console.log(JSON.stringify({items:[1,2,3]}))"'
    transform_js: |
      return output.items.filter(i => i > 1);

Dynamic Routing with JavaScript

steps:
  check:
    type: command
    exec: npm test
    on_fail:
      goto_js: |
        if (attempt > 3) return null;   // Give up
        return 'fix-and-retry';         // Jump to remediation

Prompts can live in external files with full Liquid variable access:

steps:
  overview:
    type: ai
    schema: code-review
    prompt: ./prompts/overview.liquid

Learn more: docs/liquid-templates.md · docs/schema-templates.md

🔇 Suppressing Warnings

Suppress a specific issue by adding a nearby visor-disable comment:

const testPassword = "demo123"; // visor-disable

Learn more: docs/suppressions.md

🧪 Testing Framework

Write and run integration tests for your Visor config in YAML:

# .visor.tests.yaml
tests:
  - name: "Security check finds issues"
    config: .visor.yaml
    steps:
      security:
        mock_output: '{"issues": [{"severity": "high"}]}'
    assertions:
      - step: security
        called: { exactly: 1 }
      - step: security
        output_contains: "high"
visor test --progress compact          # Run tests
visor test --list                      # List test cases
visor test --only "Security*"          # Filter tests
visor test --bail                      # Stop on first failure

Docs: Getting started · DSL reference · Fixtures & mocks · Assertions · Cookbook

📦 SDK (Programmatic Usage)

Run Visor programmatically from Node.js:

import { loadConfig, runChecks } from '@probelabs/visor/sdk';

const config = await loadConfig('.visor.yaml');
const result = await runChecks({
  config,
  checks: Object.keys(config.checks || {}),
  output: { format: 'json' },
});
console.log('Issues:', result.reviewSummary.issues?.length ?? 0);

Learn more: docs/sdk.md

🔧 Configuration

Config Loading Order

  1. CLI --config flag
  2. .visor.yaml in project root
  3. Built-in defaults

Extending Configs

extends:
  - default
  - ./team-standards.yaml
  - https://raw.githubusercontent.com/org/policies/main/base.yaml

Dynamic Config Reloading

Long-running modes (Slack, Telegram, Email, HTTP) support live config reload:

visor --slack --config .visor.yaml --watch       # Auto-reload on file change
visor --telegram --config .visor.yaml --watch    # Telegram with hot reload
visor --email --config .visor.yaml --watch       # Email with hot reload
visor config snapshots                        # List config versions
visor config diff 1 2                         # Diff two snapshots

Key Config Options

version: "1.0"
max_parallelism: 3            # Concurrent steps
max_ai_concurrency: 3         # Concurrent AI API calls
routing:
  max_loops: 10               # Loop safety limit

http_server:
  enabled: true
  port: 8080
  auth: { bearer_token: "${WEBHOOK_SECRET}" }

telemetry:
  enabled: true
  sink: otlp                  # or: file, console

steps:
  # ... your pipeline

Where Things Go (Quick Reference)

A common source of confusion is where to put AI settings. Here's the map:

version: "1.0"

# ── Global defaults (top level) ──────────────────────
ai_provider: google              # default AI provider for all steps
ai_model: gemini-2.5-flash       # default model for all steps

steps:
  my-step:
    type: ai
    prompt: "Analyze the code"

    # ── Per-step overrides (step level) ──────────────
    ai_provider: anthropic       # override provider for this step
    ai_model: claude-sonnet-4-20250514    # override model for this step
    ai_system_prompt: "You are..." # system prompt shorthand

    # ── OR use the ai: block for full config ─────────
    ai:
      provider: anthropic
      model: claude-sonnet-4-20250514
      system_prompt: "You are a senior engineer."
      retry:
        maxRetries: 3
      fallback:
        providers: [{ provider: google, model: gemini-2.5-flash }]

Common mistakes:

  • system_prompt at step level — ignored; use ai_system_prompt or put it inside ai:.
  • Top-level ai: block — not supported; use ai_provider/ai_model.
  • parseJson on command steps — unnecessary; commands auto-parse JSON.

Run visor validate to catch these.
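
The system-prompt pitfall can be illustrated side by side; this sketch only restates the rules above:

```yaml
steps:
  review:
    type: ai
    prompt: "Review the diff"
    # system_prompt: "..."          # WRONG: silently ignored at step level
    ai_system_prompt: "You are a senior reviewer."   # correct shorthand
    # ...or equivalently, inside the ai: block:
    # ai:
    #   system_prompt: "You are a senior reviewer."
```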

Learn more: docs/ai-configuration.md · docs/configuration.md

👀 Observability

Output Formats

visor --output table                          # Terminal-friendly (default)
visor --output json --output-file results.json
visor --output sarif --output-file results.sarif
visor --output markdown

OpenTelemetry Tracing

telemetry:
  enabled: true
  sink: otlp
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4318/v1/traces visor --check all

Span hierarchy: visor.run → engine.state.* → visor.check.* → visor.foreach.item

Debug Tools

visor --debug                                 # Verbose logging
visor --debug-server --debug-port 3456        # Live web visualizer

Quick debugging tips:

Use log() in JavaScript expressions (if, fail_if, transform_js):

if: |
  log("Outputs:", outputs);
  outputs["fetch-data"]?.status === "ready"

Use the json filter in Liquid to inspect objects:

type: logger
message: "Outputs: {{ outputs | json }}"

TUI mode (visor --tui): Press Tab to switch between Chat and Logs tabs, q to exit.

Learn more: docs/observability.md · docs/debugging.md · docs/debug-visualizer.md

🔐 Security

  • GitHub App support for scoped, auditable access
  • Remote extends allowlist to control external config sources
  • MCP method filtering — allow/deny lists with wildcards
  • Bash allow/deny patterns for AI-driven command execution
  • Docker & process sandboxes for isolated step execution
  • Author permissions — hasMinPermission(), isMember(), etc. for role-based logic
  • Environment filtering — control which env vars steps can access
visor --no-remote-extends
visor --allowed-remote-patterns "https://raw.githubusercontent.com/myorg/"

Learn more: docs/security.md · docs/author-permissions.md

🏢 Enterprise Policy Engine (EE)

Enterprise Edition. Requires a Visor EE license. Contact [email protected].

OPA-based policy enforcement for gating checks, MCP tools, and AI capabilities:

policy:
  engine: local
  rules: ./policies/
  fallback: deny
  roles:
    admin: { author_association: [OWNER] }
    developer: { author_association: [MEMBER, COLLABORATOR] }

Learn more: docs/enterprise-policy.md

📚 Further Reading

Getting started:
Configuration · AI config · CLI commands · GitHub Auth · CI/CLI mode · GitHub Action reference · Migration · FAQ · Glossary

Guides:
Tools & Toolkits · Assistant workflows · TDD for assistant workflows · Workflow creation · Workflow style guide · Dependencies · forEach propagation · Failure routing · Router patterns · Lifecycle hooks · Liquid templates · Schema-template system · Fail conditions · Failure conditions schema · Failure conditions impl · Timeouts · Execution limits · Event triggers · Output formats · Output formatting · Default output schema · Output history · Reusable workflows · Criticality modes · Fault management

Providers:
A2A · Command · Script · MCP · UTCP · MCP tools for AI · Claude Code · AI custom tools · AI custom tools usage · Custom tools · GitHub ops · Git checkout · HTTP integration · Memory · Human input · Custom providers

Operations:
Security · Performance · Observability · Debugging · Debug visualizer · Telemetry setup · Dashboards · Troubleshooting · Suppressions · GitHub checks · Bot integrations · Slack · Telegram · Email · WhatsApp · Teams · Scheduler · Sandbox engines

Testing:
Getting started · DSL reference · Flows · Fixtures & mocks · Assertions · Cookbook · TDD for assistants · CLI & reporters · CI integration · Troubleshooting

Enterprise:
Licensing · Enterprise policy · Scheduler storage · Database operations · Capacity planning · Production deployment · Deployment

Architecture & RFCs:
Architecture · Contributing · Failure routing RFC · Bot transports RFC · Debug visualizer RFC · Debug visualizer progress · Engine state machine plan · Engine pause/resume RFC · Event-driven GitHub RFC · Execution statistics RFC · Telemetry tracing RFC · Test framework RFC · SDK RFC · Goto/forward run plan · Loop routing refactor · Schema next PR · Fact validator gap analysis · Fact validator plan

Recipes & examples:
Recipes · Dev playbook · Tag filtering · Author permissions · Session reuse · SDK

🤝 Contributing

Learn more: CONTRIBUTING.md

📄 License

MIT License — see LICENSE


Made with ❤️ by Probe Labs
