viewgraph
Health Warning
- License — AGPL-3.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 5 GitHub stars
Code Passed
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Passed
- Permissions — No dangerous permissions requested
This tool captures live web pages and converts them into structured UI context. It provides this data to AI coding assistants via the Model Context Protocol (MCP) to help developers generate tests, audit interfaces, and compare visual changes.
Security Assessment
Overall risk: Medium. The tool inherently accesses sensitive data by capturing the DOM, layout, styles, and metadata from live web pages. If used on authenticated pages or internal development environments, it could potentially expose secrets, session tokens, or private user data to your configured AI agent. The automated code scan (12 files) found no hardcoded secrets, dangerous execution patterns, or shell command invocations. It operates without requiring dangerous system permissions. However, because it acts as a bridge between rich browser data and external AI models, developers must be careful about which pages they capture.
Quality Assessment
The project has a standard open-source license (AGPL-3.0) and the codebase passed a light security audit without issues. It appears to be actively developed, with its last repository push occurring today. However, community trust and visibility are currently very low. With only 5 GitHub stars, it is likely a very early-stage or niche project, meaning it has undergone minimal public scrutiny and testing.
Verdict
Use with caution: the underlying code is safe, but you should strictly avoid capturing authenticated pages or internal environments containing sensitive data.
ViewGraph is a developer super assistant that captures live web pages as structured, agent-ready UI context, combining layout, DOM-like structure, selectors, styles, spatial relationships, screenshots, and metadata to help coding agents generate tests, audit interfaces, compare changes, and build faster.
The UI context layer for agentic coding.
Built with Kiro, for Kiro - and every MCP-compatible agent.
Browser extension + MCP server for AI-powered UI capture, auditing, and annotation.
ViewGraph captures structured DOM snapshots from any web page and exposes them to AI coding assistants via the Model Context Protocol. Agents can query page structure, audit accessibility, find missing test IDs, compare captures, track regressions, and act on human annotations - all through 38 MCP tools.
Works with any MCP-compatible agent: Kiro, Claude Code, Cursor, Windsurf, Cline, Aider, and more. No agent-specific code - pure MCP protocol. Tools that don't support MCP can read .viewgraph.json capture files directly from disk.
Components
| Component | Description |
|---|---|
| server/ | MCP server - 38 query/analysis/request tools, WebSocket collab, baselines |
| extension/ | Chrome/Firefox extension - DOM capture, annotate, 17 enrichment collectors, multi-export |
| packages/playwright/ | Playwright fixture - capture structured DOM snapshots during E2E tests |
| power/ | Kiro Power assets - 3 hooks, 9 prompts, 3 steering docs, MCP config |
How It Works
ViewGraph runs alongside your project as a standalone tool. It does not embed into your codebase or require changes to your application. It works with any web app regardless of backend technology (Python, Ruby, Java, Go, PHP, etc.).
```
Your app (any language) --> serves HTML --> Browser renders it --> Extension captures DOM
                                                                           |
                                                                           v
Kiro / Claude / Cursor <-- MCP protocol <-- ViewGraph server <-- .viewgraph.json files
```
The extension captures the DOM from Chrome or Firefox. The server reads those capture files and exposes them to your AI agent via MCP. Your agent then uses this context to modify your source code - it never injects into or manipulates the running application directly.
Getting Started
Prerequisites: Node.js 22+, npm 9+, Chrome 116+ or Firefox 109+
```
# 1. Install the browser extension from Chrome Web Store or Firefox Add-ons (links above)

# 2. Add to your AI agent's MCP config (~/.kiro/settings/mcp.json):
{
  "mcpServers": {
    "viewgraph": { "command": "npx", "args": ["-y", "@viewgraph/core"] }
  }
}

# 3. Capture: click the ViewGraph toolbar icon on any page

# 4. Ask your agent: "Fix the annotations from my last review"
```
The server runs automatically via npx - no install needed. It auto-creates .viewgraph/captures/ and learns your URL pattern from the first capture.
GitHub Releases always carry the latest version. Chrome/Firefox store reviews can delay updates by days or weeks, while GitHub Releases always has the newest extension ZIPs and changelog. For the bleeding edge, get it from GitHub.
Alternative: npm install -g @viewgraph/core for explicit version pinning, then run viewgraph-init from each project folder to configure URL patterns and capture routing.
The extension sidebar opens with Review (annotate and comment) and Inspect (network errors, console issues) tabs. Export via Send to Agent (MCP), Copy Markdown (Jira/GitHub), or Download Report (ZIP).
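ViewGraph's actual "Copy Markdown" layout is defined by the extension and not reproduced here; purely as a hypothetical sketch, this shows the general shape of rendering annotations plus captured network and console issues into paste-able markdown:

```javascript
// Hypothetical sketch: not ViewGraph's real export format. Field names
// (selector, comment, status, url) are illustrative assumptions.
function toMarkdown({ page, viewport, annotations, networkErrors, consoleErrors }) {
  const lines = [`## Review: ${page} (${viewport})`, ''];
  for (const a of annotations) lines.push(`- \`${a.selector}\`: ${a.comment}`);
  if (networkErrors.length) {
    lines.push('', '### Network failures');
    for (const e of networkErrors) lines.push(`- ${e.status} ${e.url}`);
  }
  if (consoleErrors.length) {
    lines.push('', '### Console errors');
    for (const e of consoleErrors) lines.push(`- ${e}`);
  }
  return lines.join('\n');
}
```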
For detailed setup with screenshots, browser-specific instructions, and multi-project configuration, see the Quick Start Guide.
Try the demo: Open docs/demo/index.html - a login page with 8 planted bugs. Annotate, send to Kiro, watch them get fixed. Walkthrough.
Workflows
ViewGraph supports three broad workstreams. For the full list of 23 problems it solves, see Why ViewGraph?.
For developers with AI agents
- Open your app in the browser, click the ViewGraph icon
- Click elements or shift+drag regions, add comments describing what to fix
- Check the Inspect tab for network errors or console issues
- Click Send to Agent - annotations bundle with the full DOM capture + enrichment data
- Ask your agent to fix the issues - it has full DOM context
For testers and reviewers (no AI agent needed)
The extension works standalone. No MCP server required.
- Open the app in the browser, click the ViewGraph icon
- Click or shift+drag to select problem areas, add comments
- Export:
- Copy Markdown - paste into Jira/Linear/GitHub (includes network failures, console errors, viewport breakpoint)
- Download Report - ZIP with markdown, screenshots, network.json, console.json
For teams
A tester annotates and exports to markdown. A developer annotates and sends to Kiro. A reviewer compares captures against baselines. Same tool, same workflow, same format - the only difference is where the output goes. See Why ViewGraph? for the full list of review, release, and platform workflows.
For test automation teams
Capture structured DOM snapshots during Playwright E2E tests, or generate tests from browser captures:
- Generate tests from captures: Capture a page with the extension, ask your agent `@vg-tests` - it generates a complete Playwright test file with correct locators for every interactive element. 20-30 minutes of manual inspection reduced to one prompt.
- Capture during tests: Add `await viewgraph.capture('checkout-page')` to existing tests. The agent can then diff captures between runs, audit accessibility, and detect structural regressions.
- Annotate from tests: `await viewgraph.annotate('#email', 'Missing aria-label')` flags issues for the agent to fix with full DOM context.
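The "diff captures between runs" step can be pictured as a set comparison over captured selectors. A sketch under the assumption that each capture exposes an `elements` array with `selector` fields - the actual ViewGraph v2 field names may differ:

```javascript
// Sketch: detect structural regressions by comparing the selectors present
// in two captures. The `elements`/`selector` shape is an assumption, not
// ViewGraph's documented format.
function diffCaptures(before, after) {
  const a = new Set(before.elements.map((e) => e.selector));
  const b = new Set(after.elements.map((e) => e.selector));
  return {
    removed: [...a].filter((s) => !b.has(s)), // present before, gone now
    added: [...b].filter((s) => !a.has(s)),   // new in the latest capture
  };
}
```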
See @viewgraph/playwright for setup, API, and examples.
Capture Accuracy
ViewGraph's capture accuracy is measured automatically against 150 diverse real-world websites using a bulk capture experiment. The experiment runs ViewGraph's DOM traverser via Puppeteer, then compares the output against live DOM ground truth across 7 dimensions.
Latest results (Set A - Breadth, 48 sites across 12 categories, 4 rendering types, 6 writing systems):
| Dimension | Median | What it measures |
|---|---|---|
| Composite | 92.1% | Weighted combination of all dimensions |
| Selector accuracy | 99.7% | VG's CSS selectors resolve to real DOM elements |
| Testid recall | 100.0% | All data-testid elements captured |
| Interactive recall | 97.9% | Buttons, links, inputs captured |
| Bbox accuracy | 100.0% | Bounding boxes preserved through serialization |
| Semantic recall | 88.2% | Landmark elements (nav, main, header) captured |
| Text match | 53.1% | visibleText matches element text (see note) |
Full methodology, per-site breakdowns, and run history: scripts/experiments/bulk-capture/
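The Composite row is a weighted combination of the other dimensions. Purely for illustration - the experiment's actual weights live in the methodology linked above and are not reproduced here:

```javascript
// Illustrative only: the real dimension weights are defined in the
// bulk-capture experiment, not shown in this README.
function compositeScore(scores, weights) {
  let weighted = 0;
  let total = 0;
  for (const [dim, w] of Object.entries(weights)) {
    weighted += scores[dim] * w; // each dimension contributes its weight
    total += w;
  }
  return weighted / total; // normalize by the sum of weights
}
```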
Documentation
- User Guide - getting started, tutorials, feature guides
- Quick Start - zero to first fix in 5 minutes
- Why ViewGraph? - 23 problems it solves
- Who Benefits? - developers, testers, PMs, career switchers
- Multi-Project Setup - URL patterns, routing
- @viewgraph/playwright - Playwright fixture on npm
- Roadmap - milestone plan and completion status
- Security Assessment - threat model, HMAC auth, 4 security reviews
- Spec Index - Kiro specs, ADRs, architecture docs
- ViewGraph v2 Format Spec - capture format (v2.1.0)
- Format Research - format analysis and design rationale
- Competitive Analysis - browser MCP comparison
- Product Analysis - user journeys, pain points, competitor matrix
Acknowledgments
ViewGraph's capture format was inspired by Element to LLM (E2LLM) by insitu.im - the first browser extension to frame DOM capture as a structured perception layer for AI agents. The core insight - that agents need a purpose-built intermediate representation, not raw HTML - came from E2LLM. ViewGraph extended these foundations through deep format research that produced 20 improvement proposals across token efficiency, accessibility, enrichment, and bidirectional MCP integration. Full comparison.
ViewGraph's security assessment was conducted using the AWS Labs Threat Modeling MCP Server by Aidin Ferdowsi (AWS). The tool's structured STRIDE analysis and Threat Composer integration produced the 9-threat, 9-mitigation model that drove ViewGraph's HMAC auth implementation, prompt injection defenses, and four rounds of security reviews. Full threat model.
License
AGPL-3.0 - see COPYING for the full license text.
Copyright (c) 2026 Sourjya S. Sen. See ADR-009 for licensing rationale.