frpdeck
Health Warn
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 8 GitHub stars
Code Pass
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Pass
- Permissions — No dangerous permissions requested
This is a lightweight Python CLI and MCP server designed to manage Fast Reverse Proxy (FRP) instances. It allows users and LLMs to automate structured proxy maintenance, configuration validation, and systemd service management on a single host.
Security Assessment
Overall risk is rated as Low. The tool inherently manages local system operations, meaning it generates configuration files, syncs runtime directories, and executes shell commands (like `frpc reload` and systemd service restarts). However, this is its intended and documented purpose. A code scan of 12 files found no dangerous code patterns, no hardcoded secrets, and no requests for dangerous system permissions. Because it includes an MCP server, you should still limit its execution to controlled environments to prevent unauthorized system changes via an LLM.
Quality Assessment
The project demonstrates strong maintenance habits with recent updates pushed today and clear, structured documentation. It uses the permissive Apache-2.0 license, making it safe for integration into most projects. The main drawback is its low visibility; having only 8 GitHub stars means it has not been extensively battle-tested by a large community. However, the clean codebase and transparent design help mitigate concerns typically associated with new tools.
Verdict
Use with caution—while the code itself is clean and safe, the low community adoption means you should test the tool in a non-production environment before relying on it for critical infrastructure.
Lightweight Python CLI for managing FRP instances, with local stdio MCP support for LLM-assisted proxy maintenance.
frpdeck
frpdeck is a lightweight Python 3.11+ CLI for managing FRP instances from structured source files. It focuses on practical single-host operations: initialize instance directories, validate configuration, render generated artifacts, sync managed runtime files, apply changes locally, inspect state, and maintain structured proxy definitions without introducing a larger control plane.
It is also MCP-friendly. frpdeck includes a local stdio MCP thin wrapper so an LLM can assist with structured proxy maintenance against one bound instance directory at a time.
Highlights
- Lightweight FRP deployment and maintenance workflows for client and server instances.
- Structured proxy management backed by proxies.yaml, with import, typed add, update, remove, and preview support.
- Stable JSON outputs for automation and scripting.
- Append-only audit logging and revision snapshots for write operations.
- Local stdio MCP support for LLM-assisted proxy maintenance.
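The stable JSON outputs are intended to be consumed by scripts rather than scraped from text. A minimal sketch of that pattern in Python, using an invented payload shape (the real frpdeck field names may differ):

```python
import json

# Hypothetical stable-JSON status payload; the actual frpdeck field
# names may differ. The point is scripting against machine-readable
# output instead of parsing human-oriented text.
payload = '{"instance": "my-client", "service_active": true, "proxy_count": 3}'

status = json.loads(payload)
if status["service_active"]:
    print(f"{status['instance']}: {status['proxy_count']} proxies")
```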
Installation
Install from PyPI:
pip install frpdeck
frpdeck --help
Install from source when you want a local checkout:
python3.11 -m venv .venv
. .venv/bin/activate
python -m pip install .
For development:
python3.11 -m venv .venv
. .venv/bin/activate
python -m pip install -e '.[dev]'
Documentation
Key design notes now live under docs/:
- docs/architecture.md: layer boundaries and dependency direction
- docs/configuration.md: instance config shape, defaults, path resolution, and logging semantics
- docs/development.md: local development, tests, packaging, and MCP testing
- docs/release.md: version bump, build, and tag/publish checklist
- CHANGELOG.md: release history
Features
- `init` creates a new client or server instance directory.
- `validate` checks source config only: schema, placeholder values, token sources, path resolution, and simple proxy conflicts.
- `render` generates FRP TOML, proxy includes, and systemd units under rendered/ only.
- `sync` mirrors managed files from rendered/ into runtime/config only.
- `reload` calls `frpc reload -c ...` for client instances using the current runtime/config.
- `apply` validates, renders, syncs runtime files, installs binaries if needed, installs the systemd unit, and restarts the service.
- `restart` and `status` operate on the configured systemd service.
- `check-update` and `upgrade` support GitHub latest releases and offline archives.
- `doctor` checks Linux/systemd availability, instance files, and basic write permissions.
- `python -m frpdeck.mcp.server` starts a local stdio MCP server that exposes proxy-management tools and read-only status resources.
Current scope
frpdeck is a focused operations tool, not a full FRP control platform. It currently centers on structured instance management, proxy maintenance, local apply workflows, auditing, and MCP-assisted maintenance. HTTP control planes, remote auth layers, and web dashboards are intentionally out of scope for now.
Non-goals
- Remote HTTP transport for MCP
- Authentication or authorization for remote MCP access
- Web dashboard or visualization service
- Remote centralized control
- Interactive TOML editing
Quick start
Running frpdeck with no arguments now shows the built-in command help, including common entry points such as init, apply, proxy, status, and python -m frpdeck.mcp.server.
Initialize a client instance:
frpdeck init client my-client
The generated client scaffold includes a sample HTTP proxy in proxies.yaml so the route fields are visible in the initial config shape.
Edit the generated configuration and secret material:
${EDITOR:-vi} ./my-client/node.yaml
${EDITOR:-vi} ./my-client/proxies.yaml
mkdir -p ./my-client/secrets
printf 'replace-me\n' > ./my-client/secrets/token.txt
instance_name is the logical identity stored in node.yaml. It may differ from the directory name; status, service naming defaults, and audit data use instance_name, not instance_dir.name.
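The distinction between instance_name and the directory name can be sketched with a hypothetical resolver (a naive line scan stands in here for real YAML parsing; frpdeck's actual code differs):

```python
from pathlib import Path
import tempfile

def resolve_instance_name(instance_dir: Path) -> str:
    """Prefer instance_name from node.yaml over the directory name.

    Hypothetical helper: a naive line scan stands in for real YAML parsing.
    """
    node = instance_dir / "node.yaml"
    if node.exists():
        for line in node.read_text().splitlines():
            if line.strip().startswith("instance_name:"):
                return line.split(":", 1)[1].strip()
    return instance_dir.name  # fallback: directory name

# The directory is ./my-client, but the logical identity is "edge-01".
with tempfile.TemporaryDirectory() as tmp:
    inst = Path(tmp) / "my-client"
    inst.mkdir()
    (inst / "node.yaml").write_text("instance_name: edge-01\n")
    print(resolve_instance_name(inst))  # edge-01
```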
Validate the source configuration:
frpdeck validate --instance ./my-client
Render generated files:
frpdeck render --instance ./my-client
Mirror the rendered snapshot into runtime config without restarting anything:
sudo frpdeck sync --instance ./my-client
Apply an instance to the configured runtime paths:
sudo frpdeck apply --instance ./my-client
For offline install or replacement from a local FRP archive:
sudo frpdeck apply --instance ./my-client --archive /path/to/frp_0.65.0_linux_amd64.tar.gz
Inspect runtime state:
frpdeck status --instance ./my-client
Apply emits stage-by-stage progress in text mode so it is clear when validation, rendering, binary download/install, runtime sync, systemd install, and restart are happening.
Command semantics
- `validate` reads node.yaml and proxies.yaml, validates them, and exits. It does not write rendered/ or runtime/config.
- `render` writes the full generated snapshot into rendered/. It does not touch runtime/config, reload FRP, or restart systemd.
- `sync` mirrors the managed rendered snapshot into runtime/config. It does not run validation, rendering, reload, or restart logic.
- `reload` asks frpc to reload using the current runtime/config. If runtime config is missing, run `sync` or `apply` first.
- `apply` is the full operational path: validate, render, sync, install/upgrade the managed binary if needed, install the systemd unit, and restart the service.
- `proxy preview` is a temporary client-side preview of proxy include output. It does not modify rendered/. Top-level `render` writes the full instance snapshot into rendered/.
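The apply path described above amounts to a fixed sequence of stages with progress output. A sketch of that shape (the stage bodies are stubs, not frpdeck's real internals):

```python
# Sketch of apply as an ordered stage pipeline with progress output.
# Stage bodies are empty stubs; frpdeck's real implementation differs.

def validate(instance): pass
def render(instance): pass
def sync(instance): pass
def install_binary(instance): pass
def install_systemd_unit(instance): pass
def restart_service(instance): pass

STAGES = [validate, render, sync, install_binary,
          install_systemd_unit, restart_service]

def apply(instance: str) -> None:
    for stage in STAGES:
        # Text mode emits one progress line per stage, in order.
        print(f"[apply] {stage.__name__} ...")
        stage(instance)

apply("./my-client")
```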
Uninstall installed artifacts while keeping source configuration:
frpdeck uninstall --instance ./my-client
Delete the instance directory as well:
frpdeck uninstall --instance ./my-client --purge
Typical workflows
Client instance
- Run `frpdeck init client your-client`.
- Replace PLEASE_FILL_SERVER_ADDR and domain placeholders in node.yaml and proxies.yaml.
- Create secrets/token.txt with the real token.
- Run `frpdeck validate --instance ./your-client`.
- Run `frpdeck render --instance ./your-client`.
- Run `sudo frpdeck apply --instance ./your-client`.
- Run `frpdeck status --instance ./your-client`.
For offline binary management, apply --archive, upgrade --archive, and binary.local_archive are all supported.
Server instance
- Run `frpdeck init server your-server`.
- Create secrets/token.txt.
- If you want FRP vhost routing, explicitly set server.vhost_http_port and/or server.vhost_https_port in node.yaml.
- If you want subdomain-based routing, also set server.subdomain_host.
- Run `frpdeck validate --instance ./your-server`.
- Run `frpdeck render --instance ./your-server`.
- Run `sudo frpdeck apply --instance ./your-server`.
Server vhost modes
By default, a new server instance does not set server.vhost_http_port, server.vhost_https_port, or server.subdomain_host.
- With the default scaffold, rendered frps.toml does not bind 80 or 443 and does not enable subdomain host handling.
- When you explicitly set server.vhost_http_port or server.vhost_https_port, frpdeck renders those values into frps.toml.
- When you explicitly set server.subdomain_host, frpdeck renders subDomainHost.
Example server config with vhost enabled:
server:
bind_addr: 0.0.0.0
bind_port: 7000
vhost_http_port: 80
vhost_https_port: 443
subdomain_host: frp.example.com
log:
to: runtime/logs/frps.log
level: info
max_days: 7
disable_print_color: true
auth:
method: token
token_file: secrets/token.txt
HTTP/HTTPS proxies
Client proxy definitions for http and https stay in proxies.yaml with the existing snake_case source config style.
HTTP with custom_domains:
proxies:
- name: app_http
type: http
local_ip: 127.0.0.1
local_port: 8080
custom_domains:
- app.example.com
HTTPS with custom_domains:
proxies:
- name: app_https
type: https
local_ip: 127.0.0.1
local_port: 8443
custom_domains:
- secure.example.com
HTTP with subdomain:
proxies:
- name: app_subdomain
type: http
local_ip: 127.0.0.1
local_port: 8080
subdomain: app
custom_domains and subdomain may be set together. That is supported by the implementation, although in practice it is usually clearer to choose the one that matches the deployment pattern.
http and https proxies must define at least one of:
- custom_domains
- subdomain
Blank strings are rejected for custom_domains, subdomain, and server.subdomain_host.
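Those rules can be sketched as a small validator. This is a hypothetical illustration of the documented constraints, not frpdeck's actual validation code:

```python
def validate_route_fields(proxy: dict) -> list[str]:
    """Check route fields for an http/https proxy definition.

    Hypothetical sketch of the documented rules, not frpdeck's real code.
    """
    errors = []
    if proxy.get("type") not in ("http", "https"):
        return errors  # rules below only apply to http/https proxies
    domains = proxy.get("custom_domains") or []
    subdomain = proxy.get("subdomain")
    # Blank strings are rejected outright.
    if any(not str(d).strip() for d in domains):
        errors.append("custom_domains entries must be non-blank")
    if subdomain is not None and not str(subdomain).strip():
        errors.append("subdomain must be non-blank")
    # At least one selector is required; both together are allowed.
    if not domains and subdomain is None:
        errors.append("http/https proxies need custom_domains or subdomain")
    return errors

print(validate_route_fields({"name": "app_http", "type": "http"}))
```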
Proxy CLI shortcuts
Import one proxy mapping from a YAML file:
frpdeck proxy import ./app-http.yaml --instance ./my-client
Update one existing proxy from a YAML patch file:
frpdeck proxy update ssh ./ssh-patch.yaml --instance ./my-client
Add an HTTP proxy with one or more custom domains:
frpdeck proxy add http \
--instance ./my-client \
--name app-http \
--local-port 8080 \
--custom-domain app.example.com \
--custom-domain www.example.com
Add an HTTPS proxy:
frpdeck proxy add https \
--instance ./my-client \
--name app-https \
--local-port 8443 \
--custom-domain secure.example.com
Add an HTTP proxy using a subdomain:
frpdeck proxy add http \
--instance ./my-client \
--name app-subdomain \
--local-port 8080 \
--subdomain app
--custom-domain is repeatable, and it can be combined with --subdomain when you want both selectors on the same proxy.
MCP
frpdeck ships with a local stdio MCP thin wrapper over structured proxy CRUD, import, and preview tools plus read-only status resources. It is designed to bind to one instance directory at a time and is best used through a generated wrapper script.
Recommended workflow: generate a bound wrapper script with frpdeck mcp install-stdio-wrapper and point your MCP client at that script. Prefer the generated wrapper over writing your own unless you have a specific reason to customize startup behavior. The wrapper binds to your chosen instance directory and, by default, embeds the Python interpreter running frpdeck when the script is created. Use --python /path/to/python if you need to override that explicitly.
In practice, wrapper scripts are most commonly generated for client instances, because proxy configuration is usually managed on the client side. That is a usage pattern rather than a hard restriction: the MCP wrapper binds to an instance directory; there is no separate client-only mode.
Recommended MCP setup
On the FRP machine, change into your instance directory and generate the wrapper:
cd /path/to/your-instance
frpdeck mcp install-stdio-wrapper
This is equivalent to:
frpdeck mcp install-stdio-wrapper --instance /path/to/your-instance
The command writes /path/to/your-instance/start-mcp-stdio.sh, binds that script to the resolved absolute instance path, and embeds the Python interpreter that is running frpdeck at generation time. Replace the example path with your own instance directory.
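The effect can be sketched with a hypothetical generator: write a script bound to an absolute instance path, with the current interpreter pinned in, as L192's direct form suggests. The real generated wrapper's contents may differ:

```python
import sys
from pathlib import Path

def write_stdio_wrapper(instance_dir: Path, python: str = sys.executable) -> Path:
    """Hypothetical sketch of what install-stdio-wrapper produces:
    a script bound to an absolute instance path with a pinned interpreter.
    The real generated wrapper may differ.
    """
    instance_dir = instance_dir.resolve()  # bind to the absolute path
    script = instance_dir / "start-mcp-stdio.sh"
    script.write_text(
        "#!/bin/sh\n"
        f'exec "{python}" -m frpdeck.mcp.server --instance-dir "{instance_dir}"\n'
    )
    script.chmod(0o755)  # make it directly runnable over ssh
    return script
```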
If you need to start the server manually without the wrapper, you can still use:
python -m frpdeck.mcp.server
For a bound one-instance server, the direct form is:
python -m frpdeck.mcp.server --instance-dir /path/to/your-instance
Before configuring Claude Code, manually verify the SSH command from the Claude Code machine. Replace the host name and path with your own SSH destination and instance directory:
ssh your-ssh-host /path/to/your-instance/start-mcp-stdio.sh
That command should normally stay attached and wait for stdin/stdout traffic because the MCP stdio server is waiting for client messages. If it exits immediately or prints an error, fix the remote Python environment, instance path, or SSH setup first.
Once the manual SSH command works, add the MCP entry in Claude Code:
claude mcp add --scope user --transport stdio frpdeck -- \
ssh your-ssh-host /path/to/your-instance/start-mcp-stdio.sh
Current MCP scope is intentionally small:
- Local stdio MCP server only.
- Structured proxy CRUD/import/preview only; instance-level validate/sync/apply stay in the CLI.
- No HTTP transport.
- No remote auth layer.
- No web UI.
Audit and safety notes
Write operations append audit records under state/audit/audit.jsonl, and proxy mutations also create revision snapshots under state/revisions/. This is intended to make changes traceable and manually recoverable without turning the tool into a full control plane.
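The append-only JSONL pattern is simple enough to sketch directly. Record fields here are invented for illustration; frpdeck's actual audit schema may differ:

```python
import json
import time
from pathlib import Path

def append_audit(audit_path: Path, action: str, detail: dict) -> None:
    """Append one audit record as a single JSON line; never rewrite history."""
    record = {"ts": time.time(), "action": action, "detail": detail}
    audit_path.parent.mkdir(parents=True, exist_ok=True)
    with audit_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def read_audit(audit_path: Path) -> list[dict]:
    """Each line is an independent JSON document, so a partially
    written trailing line never corrupts earlier records."""
    return [json.loads(line)
            for line in audit_path.read_text().splitlines() if line]
```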
SSH and BatchMode
BatchMode yes is useful for unattended or scripted SSH sessions because it disables interactive password prompts and host-key confirmation. Do not treat it as the first step.
Recommended order:
- Manually run the SSH wrapper command until it works without prompts.
- Confirm that host keys are trusted and key-based auth is already working.
- Only then consider enabling BatchMode yes in ~/.ssh/config.
Example SSH config shape:
Host your-frp-host
HostName <host-or-ip>
User <user>
IdentityFile ~/.ssh/id_ed25519
# Add BatchMode yes only after manual SSH testing succeeds
# BatchMode yes
Test fixtures
Repository fixtures now live under tests/fixtures/instances/. They exist for tests and development reference only. Daily usage should start from frpdeck init ..., not by editing fixture directories directly.
Notes on paths
- Relative paths in YAML are resolved against the instance directory, not the shell working directory.
- Rendered systemd units always use absolute runtime paths.
- By default, runtime files are installed under runtime/ inside the instance directory, while the systemd unit is written to /etc/systemd/system.
- FRP's own logs are controlled by client.log or server.log and are written into the generated frpc/frps config. frpdeck's own logs are configured by top-level frpdeck_logging inside node.yaml.
- Source configuration remains YAML. node.yaml is always present, while proxies.yaml is used for client proxy definitions and may be absent on server instances. There is no separate runtime config file for frpdeck in the current design.
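The instance-relative resolution rule is worth pinning down, since it differs from shell behavior. A hypothetical helper illustrating it:

```python
from pathlib import Path

def resolve_instance_path(instance_dir: Path, value: str) -> Path:
    """Resolve a YAML path value against the instance directory,
    not the shell working directory; absolute paths pass through.
    Hypothetical helper illustrating the documented rule."""
    p = Path(value)
    return p if p.is_absolute() else (instance_dir / p).resolve()

inst = Path("/opt/frp/my-client")
# Relative: joined onto the instance directory, regardless of cwd.
print(resolve_instance_path(inst, "secrets/token.txt"))
# Absolute: used as-is.
print(resolve_instance_path(inst, "/var/log/frpc.log"))
```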