# GhostDesk

**Give your AI agent eyes, hands, and a full Linux desktop.**

An MCP server that lets LLM agents see the screen, move the mouse, type on the keyboard, launch apps, and run shell commands — all inside a sandboxed virtual desktop.

If a human can do it on a desktop, your agent can too.
## Why GhostDesk?
Most AI agents are trapped in text. They can call APIs and generate code, but they can't use software. GhostDesk changes that.
Connect any MCP-compatible LLM (Claude, GPT, Gemini...) and it gets a full Linux desktop with 11 tools to interact with any application — browsers, IDEs, office suites, terminals, legacy software, internal tools. No API needed. No integration required. If it has a UI, your agent can use it.
## What can your agent do with a full desktop?
Your agent gets its own Linux desktop. Here's what that unlocks:
### Agentic workflows — chain anything

```
"Go to the CRM, export last month's leads as CSV,
open LibreOffice Calc, build a pivot table,
take a screenshot of the chart, and email it to the team."
```
Your agent opens the browser, logs in, downloads the file, switches to another app, processes the data, captures the result, and sends it — autonomously, across multiple applications, in one conversation.
### Browse the web like a human

```
"Search for competitors on Google, open the first 5 results,
extract pricing from each page, and summarize in a spreadsheet."
```
No Selenium. No CSS selectors. No Puppeteer scripts that break every week. The agent looks at the screen, clicks what it sees, fills forms naturally — with human-like mouse movement that bypasses bot detection.
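To picture what "human-like mouse movement" means, here is a minimal sketch of a Bézier-curve path generator with micro-jitter. This is an illustration of the technique, not GhostDesk's actual implementation; the function name and parameters are made up for the example:

```python
import random

def bezier_path(start, end, steps=30, jitter=2.0):
    """Sample points along a quadratic Bezier curve from `start` to `end`.

    A random control point bends the path, and small per-point jitter
    mimics the micro-tremor of a human hand; the endpoints stay exact
    so the click lands where intended.
    """
    (x0, y0), (x1, y1) = start, end
    # A random control point near the midpoint curves the trajectory.
    cx = (x0 + x1) / 2 + random.uniform(-100, 100)
    cy = (y0 + y1) / 2 + random.uniform(-100, 100)
    points = []
    for i in range(steps + 1):
        t = i / steps
        # Quadratic Bezier: B(t) = (1-t)^2 P0 + 2(1-t)t C + t^2 P1
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        if 0 < i < steps:  # jitter interior points only
            x += random.uniform(-jitter, jitter)
            y += random.uniform(-jitter, jitter)
        points.append((x, y))
    return points
```

A real implementation would also vary the delay between points to control movement speed.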
### Operate any software — no API required

```
"Open the legacy inventory app, search for product #4521,
update the stock count to 150, and confirm the change."
```
That old Java app with no API? That internal admin panel from 2010? A Windows app running in Wine? If it renders pixels on screen, your agent can operate it.
### Data extraction at scale

```
"Open the analytics dashboard, read the KPI table,
scroll down to the revenue chart, take a screenshot,
then export the raw data."
```
The agent takes screenshots, reads the screen visually, and extracts what it needs — works on any application, any UI framework, any language.
### QA & UI testing with evidence

```
"Navigate the signup flow, try invalid emails, empty fields,
and SQL injection in every input. Screenshot each error state."
```
Your agent becomes a QA engineer — it clicks every button, fills every form, tests every edge case, and brings back screenshots as proof.
### Unattended automation — runs 24/7

```
"Every morning: log into the supplier portal, download
the latest price list, compare with yesterday's, and
flag any changes above 5%."
```
Runs headless in Docker. No physical screen. No human babysitting. Schedule your agent to handle repetitive desktop tasks while you sleep.
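Scheduling can be done with ordinary tooling. For example, a crontab entry that kicks off the morning check (the wrapper script name is hypothetical — it stands in for whatever sends the prompt to your MCP-connected agent):

```shell
# Run the supplier-portal check at 07:00 every day.
# /opt/ghostdesk/run_agent_task.sh is a hypothetical wrapper that
# forwards the quoted prompt to your agent.
0 7 * * * /opt/ghostdesk/run_agent_task.sh "Check supplier price list"
```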
### Multi-app orchestration

```
"Open VS Code, create a new Python file, write a script
that calls our API, run it in the terminal, debug if it fails,
then commit and push to GitHub."
```
Your agent isn't limited to one app. It can switch between browser, terminal, IDE, file manager, email client — just like a human switching windows on their desktop.
## Key features

| | Feature | Why it matters |
|---|---|---|
| 📸 | Screenshots | Full or regional captures with cursor overlay — the agent sees exactly what a human would see |
| 🖱️ | Human-like input | Bézier mouse curves, variable typing speed, micro-jitter — bypasses bot detection |
| 📋 | Clipboard | Read & write the clipboard — paste long text instantly |
| ⌨️ | Keyboard control | Type text, press hotkeys, keyboard shortcuts — full keyboard access |
| 🖥️ | Shell access | Run any command, launch any app, capture stdout/stderr |
| 🐳 | Sandboxed | Runs in Docker — isolated, reproducible, safe |
| 👀 | Live view | Watch your agent work in real-time via VNC or browser (noVNC) |
## 11 tools at your agent's fingertips

### See the screen

| Tool | Description |
|---|---|
| `screenshot()` | Capture the screen (full or region) with cursor position overlay |

### Mouse & keyboard

| Tool | Description |
|---|---|
| `mouse_click(x, y)` | Click at coordinates |
| `mouse_double_click()` | Double-click at coordinates |
| `mouse_drag()` | Drag from one position to another |
| `mouse_scroll()` | Scroll in any direction (up/down/left/right) |
| `type_text()` | Type with realistic per-character delays |
| `press_key()` | Press keys or combos (`ctrl+c`, `alt+F4`, `Return`...) |

### System

| Tool | Description |
|---|---|
| `exec()` | Run shell commands with stdout/stderr capture |
| `launch()` | Start GUI applications |
| `get_clipboard()` | Read clipboard contents |
| `set_clipboard()` | Write to clipboard |
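The "realistic per-character delays" behind `type_text()` can be sketched as a simple delay generator. This is an illustration of the idea, not the shipped code; the function name and timing constants are assumptions:

```python
import random

def typing_delays(text, base=0.08, jitter=0.05):
    """Return one delay (in seconds) per character of `text`.

    Humans don't type at a constant rate: each keystroke gets a base
    interval plus random jitter, and a slightly longer pause follows
    punctuation.
    """
    delays = []
    for ch in text:
        d = base + random.uniform(0, jitter)
        if ch in ".,!?;:":
            d += 0.15  # brief pause after punctuation
        delays.append(d)
    return delays
```

A driver would then sleep for each delay between key presses instead of emitting the whole string at once.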
## Quick start

### 1. Run the container

```shell
docker run -d --name ghostdesk \
  -p 3000:3000 \
  -p 5900:5900 \
  -p 6080:6080 \
  ghcr.io/yv17labs/ghostdesk:latest
```

That's it. The virtual desktop, MCP server, and VNC are all running inside an isolated container. Your agent gets a full Linux desktop — your host machine stays untouched.
### 2. Connect your AI

GhostDesk works with any MCP-compatible client. Add it to your config:

**Claude Desktop / Claude Code**

```json
{
  "mcpServers": {
    "ghostdesk": {
      "type": "http",
      "url": "http://localhost:3000/mcp"
    }
  }
}
```

**ChatGPT, Gemini, or any LLM with MCP support** — same config, just point to `http://localhost:3000/mcp`.

**Local models (Ollama, LM Studio, etc.)** — any MCP client library can connect to the same endpoint.
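Under the hood, MCP over HTTP speaks JSON-RPC 2.0, so the endpoint is reachable from any HTTP-capable client. A sketch of the request payloads a minimal client would POST to `http://localhost:3000/mcp` (the version string and client name are illustrative; in practice, use an MCP SDK rather than hand-rolling requests):

```python
import json

def jsonrpc(method, params, req_id=1):
    """Build a JSON-RPC 2.0 request envelope, as MCP uses on the wire."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# The first request of every MCP session: initialize.
init = jsonrpc("initialize", {
    "protocolVersion": "2025-03-26",  # illustrative version string
    "capabilities": {},
    "clientInfo": {"name": "my-client", "version": "0.1"},
})

# Invoke one of GhostDesk's tools, e.g. a click at (100, 200).
click = jsonrpc("tools/call", {
    "name": "mouse_click",
    "arguments": {"x": 100, "y": 200},
}, req_id=2)

print(json.dumps(click))
```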
### 3. Watch your agent work

Open `http://localhost:6080/vnc.html` in your browser to see the virtual desktop in real time.
| Service | URL |
|---|---|
| MCP server | http://localhost:3000/mcp |
| noVNC (browser) | http://localhost:6080/vnc.html |
| VNC | vnc://localhost:5900 (password: changeme) |
## Demos
See GhostDesk in action:
| Demo | Description |
|---|---|
| Google Sheets Automation | AI agent autonomously populates a spreadsheet with AI startup funding data, formats headers, and creates a 3D bar chart |
| Amazon Scraper to Google Sheets | AI agent scrapes Amazon laptops, extracts product data, populates Google Sheets, and visualizes with charts |
| Flight Search & Comparison | AI agent searches Google Flights for Paris CDG → New York JFK, compares prices, and builds a chart in LibreOffice Calc |
| Wikipedia Research | AI agent browsing Wikipedia, reading articles, and extracting information |
## How it works

GhostDesk runs a virtual Linux desktop inside Docker and exposes it as an MCP server. Your LLM agent connects and gets screenshot-first interaction:

1. **Screenshot** — the agent takes a screenshot to see what's on screen, just like a human looking at their monitor
2. **Act** — based on what it sees, the agent clicks, types, scrolls, or runs commands using human-like input simulation (Bézier mouse curves, variable typing delays, micro-jitter)
3. **Verify** — the agent takes another screenshot to confirm the result
This approach works with any application — web apps, native apps, legacy software. The LLM sees and understands the interface visually, without needing accessibility APIs. If it renders pixels, the agent can use it.
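The screenshot → act → verify cycle can be sketched as plain control flow. Here `agent` and `desktop` are hypothetical stand-ins for your LLM and the MCP tool calls, not part of GhostDesk's API:

```python
def run_step(agent, desktop, goal, max_attempts=3):
    """One see-act-verify cycle: look, decide, act, then look again.

    `desktop` wraps the MCP tools (screenshot, mouse_click, type_text...);
    `agent` is an LLM that turns an image plus a goal into the next action.
    """
    for _ in range(max_attempts):
        before = desktop.screenshot()        # see
        action = agent.decide(goal, before)  # plan from pixels
        desktop.perform(action)              # act: click/type/scroll/exec
        after = desktop.screenshot()         # verify
        if agent.goal_reached(goal, after):
            return True
    return False
```

Real agents interleave these cycles across many tools, but every step reduces to this loop.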
## Model Requirements

GhostDesk requires an LLM with these core capabilities:

- **Vision** — understand and analyze screenshots
- **Tool use** — call MCP tools to interact with the desktop
- **Reasoning** — plan multi-step tasks and adapt to changing screen states
Any LLM with strong vision and reasoning capabilities (e.g., Claude, GPT-4, Gemini, Qwen 3.5) will work. Proprietary and open-source models work equally well, as long as they support tool use and can reason through multi-step workflows.
## Configuration

| Variable | Default | Description |
|---|---|---|
| `SCREEN_WIDTH` | `1280` | Virtual screen width (pixels) |
| `SCREEN_HEIGHT` | `800` | Virtual screen height (pixels) |
| `SCREEN_DEPTH` | `24` | Color depth (bits) |
| `VNC_PASSWORD` | `changeme` | VNC access password |
| `PORT` | `3000` | MCP server port |
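These variables are passed as ordinary Docker environment flags. For example, to run with a larger screen and a non-default VNC password:

```shell
docker run -d --name ghostdesk \
  -e SCREEN_WIDTH=1920 \
  -e SCREEN_HEIGHT=1080 \
  -e VNC_PASSWORD=s3cret \
  -p 3000:3000 -p 5900:5900 -p 6080:6080 \
  ghcr.io/yv17labs/ghostdesk:latest
```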
## Tests

```shell
uv run pytest --cov
```
## License
AGPL-3.0 with Commons Clause — see LICENSE.
Commercial use (resale, paid SaaS, etc.) requires written permission from the project owner.