mirroir-mcp
Give your AI eyes, hands, and a real iPhone. An MCP server that lets any AI agent see the screen, tap what it needs, and figure the rest out — through macOS iPhone Mirroring. Experimental support for macOS windows. 32 tools, any MCP client.
Requirements
- macOS 15+
- iPhone connected via iPhone Mirroring
Install
/bin/bash -c "$(curl -fsSL https://mirroir.dev/get-mirroir.sh)"
or via npx:
npx -y mirroir-mcp install
or via Homebrew:
brew tap jfarcand/tap && brew install mirroir-mcp
The first time you take a screenshot, macOS will prompt for Screen Recording and Accessibility permissions. Grant both.
Per-client setup

Claude Code
claude mcp add --transport stdio mirroir -- npx -y mirroir-mcp
GitHub Copilot (VS Code)
Install from the MCP server gallery: search @mcp mirroir in the Extensions view, or add to .vscode/mcp.json:
{
"servers": {
"mirroir": {
"type": "stdio",
"command": "npx",
"args": ["-y", "mirroir-mcp"]
}
}
}
Cursor
Add to .cursor/mcp.json in your project root:
{
"mcpServers": {
"mirroir": {
"command": "npx",
"args": ["-y", "mirroir-mcp"]
}
}
}
OpenAI Codex
codex mcp add mirroir -- npx -y mirroir-mcp
Or add to ~/.codex/config.toml:
[mcp_servers.mirroir]
command = "npx"
args = ["-y", "mirroir-mcp"]
Install from source
git clone https://github.com/jfarcand/mirroir-mcp.git
cd mirroir-mcp
./mirroir.sh
Use the full path to the binary in your .mcp.json: <repo>/.build/release/mirroir-mcp.
How it works
Every interaction follows the same loop: observe, reason, act. describe_screen gives the AI every text element with tap coordinates (eyes). The LLM decides what to do next (brain). tap, type_text, swipe execute the action (hands) — then it loops back to observe. No scripts, no coordinates, just intent.
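The loop above can be sketched in a few lines. This is an illustrative stub, not mirroir-mcp internals: `describe_screen` and `tap` stand in for the MCP tool calls, and the inline element search stands in for the LLM's decision.

```python
def describe_screen():
    # Stub: the real tool returns every on-screen text element with tap coordinates.
    return [{"text": "Settings", "x": 200, "y": 640},
            {"text": "General", "x": 200, "y": 720}]

def tap(x, y):
    return f"tapped ({x}, {y})"

def run(goal, max_steps=5):
    log, tapped = [], set()
    for _ in range(max_steps):
        elements = describe_screen()                          # observe (eyes)
        target = next((el for el in elements                  # reason (stub brain)
                       if goal.lower() in el["text"].lower()
                       and el["text"] not in tapped), None)
        if target is None:                                    # nothing left to do
            break
        tapped.add(target["text"])
        log.append(tap(target["x"], target["y"]))             # act (hands)
    return log

print(run("General"))  # → ['tapped (200, 720)']
```

The point of the structure is that the agent never needs hardcoded coordinates: every iteration re-observes the screen before acting.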
Examples
Paste any of these into Claude Code, Claude Desktop, ChatGPT, Cursor, or any MCP client:
Open Messages, find my conversation with Alice, and send "running 10 min late".
Open Calendar, create a new event called "Dentist" next Tuesday at 2pm.
Open my Expo Go app, tap "LoginDemo", test the login screen with
[email protected] / password123. Screenshot after each step.
Start recording, open Settings, scroll to General > About, stop recording.
Screen Intelligence
describe_screen is the AI's eyes. Three backends work together to give the agent a complete picture of what's on screen — text, icons, and semantic UI structure.
Apple Vision OCR (default)
The default backend uses Apple's Vision framework to detect every text element on screen and return exact tap coordinates. This is fast, local, and requires no API keys or external services.
Icon Detection (YOLO CoreML)
Text-only OCR misses non-text UI elements — buttons, toggles, tab bar icons, activity rings. Drop a YOLO CoreML model (.mlmodelc) in ~/.mirroir-mcp/models/ and the server auto-detects it at startup, merging icon detection results with OCR text. The AI gets tap targets for elements that text-only OCR cannot see.
| Mode | `ocrBackend` setting | Behavior |
|---|---|---|
| Auto-detect (default) | `"auto"` | Uses Vision + YOLO if a model is installed, Vision only otherwise |
| Vision only | `"vision"` | Apple Vision OCR text only |
| YOLO only | `"yolo"` | CoreML element detection only |
| Both | `"both"` | Always merge both backends (falls back to Vision if no model) |
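One way to picture the merge step: keep every OCR text hit, and add a YOLO detection only when it does not overlap an element OCR already found, so each on-screen element yields a single tap target. This sketch is illustrative, not the shipped merge logic.

```python
# Illustrative merge: drop an icon detection when its center falls inside
# an OCR element's bounding box; otherwise keep it as a new tap target.
def merge(ocr, icons):
    def contains(box, x, y):
        return (box["x"] <= x <= box["x"] + box["w"]
                and box["y"] <= y <= box["y"] + box["h"])
    merged = list(ocr)
    for icon in icons:
        cx, cy = icon["x"] + icon["w"] / 2, icon["y"] + icon["h"] / 2
        if not any(contains(el, cx, cy) for el in ocr):
            merged.append(icon)
    return merged

ocr = [{"label": "General", "x": 40, "y": 700, "w": 120, "h": 30}]
icons = [{"label": "chevron", "x": 60, "y": 705, "w": 20, "h": 20},   # overlaps OCR text
         {"label": "toggle", "x": 300, "y": 700, "w": 50, "h": 30}]   # invisible to OCR
print([el["label"] for el in merge(ocr, icons)])  # → ['General', 'toggle']
```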
AI Vision Mode (embacle)
Instead of local OCR, describe_screen can send the screenshot to an AI vision model that identifies UI elements semantically — cards, tabs, buttons, icons, navigation structure — not just raw text. This produces richer context for the agent, especially on screens with complex layouts.
The embacle runtime is embedded directly into the mirroir-mcp binary via Rust FFI. describe_screen calls the embedded runtime in-process — no separate server, no network round-trip, no additional setup. The FFI layer (EmbacleFFI.swift → libembacle.a) handles initialization, chat completion requests, and memory management across the Swift/Rust boundary.
embacle routes vision requests through already-authenticated CLI tools (GitHub Copilot, Claude Code) so there is no separate API key to manage. If you have a Copilot or Claude Code subscription, you already have access.
Install
brew tap dravr-ai/tap
brew install embacle # CLI tools (embacle-server, embacle-mcp)
brew install embacle-ffi # Rust FFI static library (libembacle.a)
Then rebuild mirroir-mcp from source (or reinstall via Homebrew) so the binary links against libembacle.a:
# From source
swift build -c release
# Or via Homebrew (rebuilds automatically)
brew reinstall mirroir-mcp
Zero-config activation
When the embacle FFI is linked into the binary, screenDescriberMode defaults to "auto" which automatically resolves to vision mode. No settings change required — install embacle-ffi, rebuild, and describe_screen starts using AI vision.
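The resolution rule described above amounts to a small decision function. A minimal sketch (the function name is illustrative, not the actual implementation):

```python
# "auto" resolves to AI vision when the embacle FFI is linked into the
# binary, and to local OCR otherwise; an explicit setting always wins.
def resolve_mode(setting, embacle_linked):
    if setting == "auto":
        return "vision" if embacle_linked else "ocr"
    return setting  # explicit "ocr" or "vision" is honored as-is

print(resolve_mode("auto", embacle_linked=True))    # → vision
print(resolve_mode("auto", embacle_linked=False))   # → ocr
print(resolve_mode("ocr", embacle_linked=True))     # → ocr
```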
To force local OCR even when embacle is available, explicitly set "ocr":
// .mirroir-mcp/settings.json
{
"screenDescriberMode": "ocr"
}
See Configuration for all available settings.
Skills
When you find yourself repeating the same agent workflow, capture it as a skill. Skills are SKILL.md files — numbered steps the AI follows, adapting to layout changes and unexpected dialogs. Steps like Tap "Email" use OCR — no hardcoded coordinates.
Place files in ~/.mirroir-mcp/skills/ (global) or <cwd>/.mirroir-mcp/skills/ (project-local).
---
version: 1
name: Commute ETA Notification
app: Waze, Messages
tags: ["workflow", "cross-app"]
---
## Steps
1. Launch **Waze**
2. Wait for "Où va-t-on ?" to appear
3. Tap "Où va-t-on ?"
4. Wait for "${DESTINATION:-Travail}" to appear
5. Tap "${DESTINATION:-Travail}"
6. Wait for "Y aller" to appear
7. Tap "Y aller"
8. Wait for "min" to appear
9. Remember: Read the commute time and ETA.
10. Press Home
11. Launch **Messages**
12. Tap "New Message"
13. Type "${RECIPIENT}" and select the contact
14. Type "On my way! ETA {eta}"
15. Press **Return**
16. Screenshot: "message_sent"
${VAR} placeholders resolve from environment variables. ${VAR:-default} for fallbacks.
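The placeholder syntax follows shell-style parameter expansion. A minimal sketch of the substitution, assuming the real parser behaves like the shell for these two forms (edge cases may differ):

```python
import os
import re

# Resolve ${VAR} and ${VAR:-default} against a mapping of environment
# variables; an unset ${VAR} with no default is left untouched.
def resolve(text, env=os.environ):
    def repl(m):
        name, default = m.group(1), m.group(2)
        value = env.get(name)
        if value is None:
            return default if default is not None else m.group(0)
        return value
    return re.sub(r"\$\{(\w+)(?::-([^}]*))?\}", repl, text)

env = {"RECIPIENT": "Alice"}
print(resolve('Tap "${DESTINATION:-Travail}"', env))  # → Tap "Travail"
print(resolve('Type "${RECIPIENT}"', env))            # → Type "Alice"
```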
Skill Marketplace
Install ready-to-use skills from jfarcand/mirroir-skills:
git clone https://github.com/jfarcand/mirroir-skills ~/.mirroir-mcp/skills
From Exploration to CI
The generate_skill tool lets an AI agent explore an app and produce SKILL.md files. It uses breadth-first search (BFS) to traverse the app as a navigation graph — screens are nodes, tappable elements are edges. The explorer describes each screen, matches elements against component definitions to decide what to tap, visits child screens, and backtracks via the back chevron. Duplicate screens are skipped via structural fingerprinting. See Component Detection below for how the explorer interprets raw elements into structured UI components.
The explorer works viewport-by-viewport: after calibrating the page length, it builds a plan from the current viewport, taps elements top-to-bottom, scrolls down to reveal more content, and rebuilds the plan for each new viewport. This approach works with both OCR and AI vision describers. Pass seed for deterministic ordering across runs.
Exploration is bounded — it does not discover every reachable screen in large apps. Depth, screen count, and time limits keep runs practical. For targeted flows, provide a goal to focus the traversal.
graph TD
A["Launch App"] --> B["Describe Screen"]
B --> C{"Calibrated?"}
C -- No --> D["Scroll Full Page"]
D --> E{"skip_calibration?"}
E -- No --> F["Component Detect +\nClassify + Validate"]
E -- Yes --> G["Classify Elements\nDirectly"]
F --> H["Build Plan"]
G --> H
C -- Yes --> H
H --> I{"Untried\nElements?"}
I -- Yes --> J["Tap Element"]
I -- No --> K["Return to Root"]
J --> M["Describe +\nClassify Edge"]
M --> N{"Transition"}
N -- new screen --> O["Add to Frontier"]
O --> P["Backtrack"]
N -- revisited/dead --> P
P -- push: tap back --> H
P -- modal: tap close --> H
P -- tab: tap prev --> H
K --> Q{"Frontier\nEmpty?"}
Q -- No --> R["Next Frontier\nScreen"]
R --> B
Q -- Yes --> S["Generate SKILL.md"]
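The traversal described above reduces to a standard BFS skeleton with a duplicate check. This sketch stubs out all app interaction and uses an illustrative fingerprint (hash of sorted element labels); it is not the mirroir-mcp implementation.

```python
from collections import deque
import hashlib

# Stub app: screen name -> list of (element label, child screen or None).
# "About" is reachable from two places; the fingerprint visits it once.
APP = {
    "Root": [("General", "General"), ("About", "About")],
    "General": [("About", "About")],
    "About": [("Version", None)],   # leaf: no navigation edges
}

def fingerprint(screen):
    # Structural fingerprint: order-insensitive hash of the element labels.
    labels = sorted(el for el, _ in APP[screen])
    return hashlib.sha256("|".join(labels).encode()).hexdigest()

def explore(root, max_depth=6):
    frontier = deque([(root, 0)])
    seen, order = set(), []
    while frontier:
        screen, depth = frontier.popleft()
        fp = fingerprint(screen)
        if fp in seen or depth > max_depth:   # skip duplicates, bound depth
            continue
        seen.add(fp)
        order.append(screen)
        for _, child in APP[screen]:
            if child is not None:
                frontier.append((child, depth + 1))
    return order

print(explore("Root"))  # → ['Root', 'General', 'About']
```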
Generate
Two modes: autonomous exploration (BFS) and guided session (manual step-by-step).
Autonomous BFS exploration — the agent explores on its own:
Explore the Settings app and generate a skill that checks the iOS version.
This calls generate_skill(action: "explore", app_name: "Settings", goal: "check iOS version") under the hood. The explorer launches the app, runs BFS from the root screen, and outputs a SKILL.md for the discovered path.
| Parameter | Default | Description |
|---|---|---|
| `app_name` | required | App to explore |
| `goal` | none | Focus exploration toward a specific flow (e.g. "check software version") |
| `goals` | none | Array of goals — one SKILL.md per goal |
| `max_depth` | 6 | Maximum BFS depth |
| `max_screens` | 30 | Maximum screens to visit |
| `max_time` | 300 | Maximum seconds before stopping |
| `strategy` | auto | `"mobile"` (default), `"social"` (Reddit, Instagram), or `"desktop"` (macOS windows) |
| `skip_calibration` | false | Skip component detection during calibration; scrolling still runs. Useful with AI vision describers that produce clean semantic elements |
| `seed` | random | Integer seed for deterministic exploration ordering; the same seed produces identical tap sequences |
| `fresh` | true | Discard the persisted navigation graph and explore from scratch; set false for incremental exploration |
Guided session — the AI navigates manually, capturing each screen:
1. `generate_skill(action: "start", app_name: "MyApp")` — launch the app, OCR the first screen
2. Use `tap`/`swipe`/`type_text` to navigate, then `generate_skill(action: "capture")` to record each screen
3. `generate_skill(action: "finish")` — assemble the captured screens into a SKILL.md
Test
Run skills deterministically from the CLI — no AI in the loop:
mirroir test apps/settings/check-about
mirroir test --junit results.xml --verbose # JUnit output
mirroir test --dry-run apps/settings/check-about # validate without executing
| Option | Description |
|---|---|
| `--junit <path>` | Write a JUnit XML report |
| `--screenshot-dir <dir>` | Save failure screenshots (default: `./mirroir-test-results/`) |
| `--timeout <seconds>` | `wait_for` timeout (default: 15) |
| `--verbose` | Step-by-step detail |
| `--dry-run` | Parse and validate without executing |
| `--no-compiled` | Skip compiled skills, force full OCR |
Exit code 0 = all pass, 1 = any failure.
Compiled Skills
Compile a skill once to capture coordinates and timing. Replay with zero OCR — a 10-step skill drops from 5+ seconds of OCR to under a second.
mirroir compile apps/settings/check-about # compile
mirroir test apps/settings/check-about # auto-detects .compiled.json
mirroir test --no-compiled check-about # force full OCR
AI agents auto-compile skills as a side-effect of the first MCP run. See Compiled Skills for details.
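To make the speedup concrete, here is a hypothetical sketch of compiled-skill replay. The JSON shape is invented for illustration (the real `.compiled.json` format is not shown here); the point is that each step carries coordinates captured at compile time, so replay issues taps with no OCR pass at all.

```python
import json

# Hypothetical compiled-skill format: steps with pre-captured coordinates.
compiled = json.loads("""
{
  "skill": "check-about",
  "steps": [
    {"action": "tap", "label": "General", "x": 200, "y": 720},
    {"action": "tap", "label": "About",   "x": 200, "y": 640}
  ]
}
""")

def replay(compiled, tap):
    # No describe_screen call per step: coordinates are already known.
    for step in compiled["steps"]:
        if step["action"] == "tap":
            tap(step["x"], step["y"])

taps = []
replay(compiled, lambda x, y: taps.append((x, y)))
print(taps)  # → [(200, 720), (200, 640)]
```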
AI-Assisted Diagnosis
When a test step fails, pass --agent to get an AI diagnosis of what went wrong and suggested fixes:
mirroir test --agent gpt-5.3 apps/settings/check-about
mirroir test --agent claude-sonnet-4-6 apps/settings/check-about
mirroir test --agent ollama:llama3 apps/settings/check-about
mirroir test --agent embacle apps/settings/check-about
Built-in agents:
| Agent | Provider | API Key |
|---|---|---|
| `gpt-5.3` | OpenAI | `OPENAI_API_KEY` |
| `claude-sonnet-4-6`, `claude-haiku-4-5` | Anthropic | `ANTHROPIC_API_KEY` |
| `ollama:<model>` | Ollama (local) | None |
| `embacle`, `embacle:claude` | embacle-server | CLI agent key |
Custom agents can be defined as YAML profiles in ~/.mirroir-mcp/agents/.
embacle routes requests through already-authenticated CLI tools (GitHub Copilot, Claude Code, etc.) — no separate API key needed:
brew tap dravr-ai/tap && brew install embacle
mirroir test --agent embacle my-skill
Component Detection
The explorer doesn't guess from raw OCR — it matches screen regions against component definitions. Raw OCR returns a flat list of text elements with no structure (General and > are two unrelated strings). Component definitions bridge this gap: each definition is a .md file that describes a UI pattern (table rows, toggles, tab bars, summary cards) with match rules, interaction behavior, and grouping logic.
The detection pipeline groups OCR elements into rows, evaluates each row against all loaded definitions using hard constraints (zone, element count, chevron presence) and soft scoring signals, then selects the highest-scoring match. Multi-row elements (e.g. Health app summary cards with title + subtitle + value) are absorbed into a single tappable component via the grouping rules.
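A toy version of that pipeline: group elements into rows by y-coordinate, reject rows that fail a hard constraint, score the rest, and keep the best match. The constraint and weights here are illustrative, not the shipped definitions.

```python
# Group OCR elements into rows: elements within `tolerance` points of the
# previous element's y-coordinate share a row.
def group_rows(elements, tolerance=10):
    rows = []
    for el in sorted(elements, key=lambda e: e["y"]):
        if rows and abs(rows[-1][0]["y"] - el["y"]) <= tolerance:
            rows[-1].append(el)
        else:
            rows.append([el])
    return rows

def score_table_row(row):
    # Hard constraint: a settings-style table row has 2-3 elements.
    if not 2 <= len(row) <= 3:
        return None
    score = 0.0
    if any(el["text"] == ">" for el in row):          # soft signal: chevron
        score += 0.5
    score += min(el.get("conf", 1.0) for el in row) * 0.5  # soft signal: OCR confidence
    return score

elements = [
    {"text": "General", "y": 700, "conf": 0.98},
    {"text": ">",       "y": 702, "conf": 0.90},
    {"text": "Search",  "y": 300, "conf": 0.95},   # lone element: fails hard constraint
]
scored = [(row, score_table_row(row)) for row in group_rows(elements)]
best = max((s for s in scored if s[1] is not None), key=lambda s: s[1])
print([el["text"] for el in best[0]])  # → ['General', '>']
```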
Each definition specifies:
- **Match Rules** — zone (nav bar / content / tab bar), element count range, chevron/numeric/text patterns, minimum OCR confidence
- **Interaction** — whether to tap, which element to target (`first_navigation_element`, `centered_element`, etc.), the expected result (`navigates`, `toggles`, `dismisses`), and whether to backtrack afterward
- **Grouping** — how many points below the anchor row to absorb, and under what conditions
20 iOS component definitions ship built-in. Place custom definitions in ~/.mirroir-mcp/components/ or <cwd>/.mirroir-mcp/components/. Test a definition against the current live screen with calibrate_component.
Vision Indicators
AI vision describers describe UI elements semantically ("Activité chevron") rather than character-by-character ("Activité" + ">"). A vision-indicators.md file maps these descriptions to OCR-compatible characters so the component pipeline works identically with both backends:
## Indicators
- chevron: >
- dismiss: ×
- back: <
When a vision element ends with a mapped suffix (e.g. "Entraînements chevron"), the normalizer splits it into two elements: "Entraînements" + ">". Place vision-indicators.md alongside your component definitions.
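The normalizer's suffix split can be sketched directly from the mapping above; function and variable names here are illustrative.

```python
# If a vision element's label ends with a mapped indicator word, split it
# into the text element plus the OCR-style symbol; otherwise pass through.
INDICATORS = {"chevron": ">", "dismiss": "\u00d7", "back": "<"}

def normalize(label):
    for word, symbol in INDICATORS.items():
        suffix = " " + word
        if label.endswith(suffix):
            return [label[: -len(suffix)], symbol]
    return [label]

print(normalize("Entra\u00eenements chevron"))  # → ['Entraînements', '>']
print(normalize("Fermer dismiss"))              # → ['Fermer', '×']
print(normalize("R\u00e9sum\u00e9"))            # → ['Résumé']
```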
See Component Detection for the full definition format, match rule reference, and the detection pipeline.
Security
Giving an AI access to your phone demands defense in depth. mirroir-mcp is fail-closed at every layer.
- **Tool permissions** — without a config file, only read-only tools (`screenshot`, `describe_screen`) are exposed. Mutating tools are hidden from the MCP client entirely; it never sees them.
- **App blocking** — `blockedApps` in `permissions.json` prevents the AI from interacting with sensitive apps such as Wallet or Banking, even when mutating tools are allowed.
- **No root required** — runs as a regular user process using the macOS CGEvent API. No daemons, no kernel extensions, no root privileges; only Accessibility permissions.
- **Kill switch** — close iPhone Mirroring to kill all input instantly.
// ~/.mirroir-mcp/permissions.json
{
"allow": ["tap", "swipe", "type_text", "press_key", "launch_app"],
"deny": [],
"blockedApps": ["Wallet", "Banking"]
}
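The fail-closed rules above compose into a single check. A minimal sketch, assuming this evaluation order (function names are illustrative, not mirroir-mcp's API):

```python
READ_ONLY = {"screenshot", "describe_screen"}

# Fail-closed: no config means read-only tools only; blocked apps veto
# everything; deny beats allow.
def is_permitted(tool, app, config=None):
    if config is None:                       # no config file: fail closed
        return tool in READ_ONLY
    if app in config.get("blockedApps", []):
        return False
    if tool in config.get("deny", []):
        return False
    return tool in READ_ONLY or tool in config.get("allow", [])

config = {"allow": ["tap", "type_text"], "deny": [], "blockedApps": ["Wallet"]}
print(is_permitted("tap", "Messages", config))   # → True
print(is_permitted("tap", "Wallet", config))     # → False  (blocked app)
print(is_permitted("swipe", "Messages", config)) # → False  (not allowed)
print(is_permitted("tap", "Messages", None))     # → False  (no config)
```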
See Permissions and Security for the full threat model.
CLI Tools
Recorder
Record interactions as a skill file:
mirroir record -o login-flow.yaml -n "Login Flow" --app "MyApp"
Doctor
Verify your setup:
mirroir doctor
mirroir doctor --json # machine-readable output
Configure
Set up your keyboard layout for non-US keyboards:
mirroir configure
Updating
# curl installer
/bin/bash -c "$(curl -fsSL https://mirroir.dev/get-mirroir.sh)"
# npx
npx -y mirroir-mcp install
# Homebrew
brew upgrade mirroir-mcp
# From source
git pull && swift build -c release
Uninstall
# Homebrew
brew uninstall mirroir-mcp
# From source
./uninstall-mirroir.sh
Configuration
All settings live in settings.json — project-local (.mirroir-mcp/settings.json) or global (~/.mirroir-mcp/settings.json). Project-local settings override global ones. Every setting also has a corresponding environment variable (e.g. MIRROIR_SCREEN_DESCRIBER_MODE).
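The lookup order can be sketched as a layered fallback. This assumes environment variables take precedence over both files (the text states each setting has an env var but not its priority), so treat the ordering as an assumption:

```python
import os

# Layered lookup: env var, then project-local settings.json, then global,
# then the built-in default. Precedence of the env var is an assumption.
def setting(key, env_name, project, global_, default):
    if env_name in os.environ:
        return os.environ[env_name]
    if key in project:
        return project[key]
    if key in global_:
        return global_[key]
    return default

project = {"screenDescriberMode": "ocr"}          # .mirroir-mcp/settings.json
global_ = {"screenDescriberMode": "auto",          # ~/.mirroir-mcp/settings.json
           "explorationMaxScreens": 30}
print(setting("screenDescriberMode", "MIRROIR_SCREEN_DESCRIBER_MODE",
              project, global_, "auto"))           # project-local wins
print(setting("explorationMaxScreens", "MIRROIR_EXPLORATION_MAX_SCREENS",
              project, global_, 30))               # falls through to global
```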
{
"screenDescriberMode": "auto",
"agent": "embacle",
"ocrBackend": "auto",
"keystrokeDelayUs": 15000,
"explorationMaxScreens": 30
}
See Configuration Reference for all 40+ settings covering screen intelligence, input timing, scroll behavior, exploration budgets, AI providers, and keyboard layouts.
Documentation
| Guide | Description |
|---|---|
| Tools Reference | All 32 tools, parameters, and input workflows |
| Configuration | All settings: screen intelligence, input timing, exploration, AI providers |
| FAQ | Security, focus stealing, keyboard layouts, embacle/vision mode |
| Security | Threat model, kill switch, and recommendations |
| Permissions | Fail-closed permission model and config file |
| Known Limitations | Focus stealing, keyboard layout gaps, autocorrect |
| Component Detection | Component definitions, calibration, and the detection pipeline |
| YOLO Icon Detection | Recommended YOLO models, CoreML setup, and configuration |
| Compiled Skills | Zero-OCR skill replay |
| Testing | FakeMirroring, integration tests, and CI strategy |
| Troubleshooting | Debug mode and common issues |
| Contributing | How to add tools, commands, and tests |
| Skills Marketplace | Skill format, plugin discovery, and authoring |
Community
Join the Discord server to ask questions, share skills, and discuss ideas.
Contributing
Contributions welcome. By submitting a patch, you agree to the Contributor License Agreement — your Git commit metadata serves as your electronic signature.
Why "mirroir"? — It's the old French spelling of miroir (mirror). A nod to the author's roots, not a typo.