helmsman
Health Warning
- No license — Repository has no license file
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 17 GitHub stars
Code Passed
- Code scan — Scanned 4 files during light audit, no dangerous patterns found
Permissions Passed
- Permissions — No dangerous permissions requested
This tool is an MCP server that dynamically generates context-aware instructions for AI coding agents. It adapts its guidance based on the AI model's capability tier (e.g., Opus vs. Haiku) and your local environment, replacing the need for static instruction files.
Security Assessment
Overall risk: Low. The automated code scan reviewed 4 files and found no dangerous patterns, hardcoded secrets, or requests for risky permissions. Because the tool is designed to detect your operating system, shell, and installed packages to tailor its instructions, it inherently executes local shell commands (like checking what tools are installed). However, this appears to be strictly limited to read-only system discovery rather than executing destructive actions. It requires no network access to function.
Quality Assessment
The project is in early but active development. It was updated as recently as today, indicating ongoing maintenance. However, it has a relatively low community trust footprint with 17 GitHub stars. Most notably, the repository lacks a license file. Without an explicit open-source license, the code is technically under exclusive copyright by default, meaning you do not have formal legal permission to use, modify, or distribute it.
Verdict
Use with caution — the code itself appears safe and lightweight, but the complete absence of a license makes it a legal risk for any formal or commercial project.
Helmsman
Adaptive instruction server for AI coding agents.
The Problem
Static AGENTS.md files create instruction entropy collapse.
Serving the same instructions to Opus, Sonnet, and Haiku is fantasy. They have different capabilities, different costs, and different failure modes. Static instructions:
- Rot silently - written once, never updated, drift from reality
- Waste tokens - Opus doesn't need step-by-step guidance
- Cause failures - Haiku needs guardrails it doesn't get
- Ignore context - "use brew" when you're on Arch with mise
The real cost isn't just tokens - it's instruction determinism. You can't control what you can't adapt.
The Solution
Helmsman serves dynamic, context-aware instructions via MCP (and CLI):
- Model-aware: Opus gets minimal guidance, Haiku gets verbose hand-holding
- Environment-aware: Detects OS, shell, available tools
- Template-based: Jinja2 templates adapt to context
{% if model.tier == "agi" %}
Verify packages exist. You know what to do.
{% else %}
1. Read the file first with Read tool
2. Check for existing patterns
3. Verify packages exist - do NOT invent them
{% endif %}
{% if env.has_mise %}
Use mise for runtime management.
{% elif env.has_brew %}
Use brew for packages.
{% endif %}
Install
cargo install helmsman
Quick Start
- Create AGENTS.md.j2 in your project root
- Add to .mcp.json:
{
"mcpServers": {
"helmsman": {
"type": "stdio",
"command": "helmsman"
}
}
}
- Call the prompt:
/helmsman:instructions claude-opus-4-5-20251101
Tiers
Three capability tiers, parallel to Anthropic's model siblings. Could expand to 4-5, but we're not going the OpenAI route of 40 model names (nano, mini, micro, medium, large...).
monkey
Follows instructions. Useful but needs guardrails. Tell it exactly what to do, what NOT to do, and keep it inside the perimeter. Without guidance, it will hallucinate packages and invent APIs.
Examples: Haiku, GPT-5.2 mini, Gemini Flash
engineer
Knows the basics. Competent but lacks judgment. Will delete your 300GB cache to fix a bug. Bug fixed, but now you wait 4 hours for it to rebuild. Needs boundaries, not hand-holding.
Examples: Sonnet, GPT-5.2 medium, Gemini Pro
agi
The architect. Don't explain how to use cargo or how to publish. It knows. Give it constraints and goals, not procedures. Wasting tokens on step-by-step instructions is burning money.
Examples: Opus, GPT-5.2 high/xhigh, DeepSeek R3
Shortcuts: a/architect (agi), e/eng/standard (engineer), m/basic/simple (monkey)
CLI
helmsman # MCP server mode
helmsman -i # print instructions (default tier)
helmsman -i m # monkey tier
helmsman -i basic # monkey tier (neutral alias)
helmsman -i a # agi tier
helmsman -i architect # agi tier (neutral alias)
helmsman -i claude-opus-4-5-20251101 # resolves to agi tier
helmsman -i gpt-4o-mini # resolves to monkey tier
# Override tier mapping for new/unknown models
helmsman -i unknown-model --tier engineer
# Show diff between tiers
helmsman -i a --diff e # show AGI vs Engineer differences
helmsman -s commit # render .skills/commit.j2
helmsman -l # list available skills
helmsman --validate # check skill syntax
helmsman -t # show token count
Template Context
{# Model #}
{{ model.id }} {# "claude-opus-4-5-20251101" #}
{{ model.tier }} {# "agi", "engineer", "monkey" #}
{# Environment #}
{{ env.os }} {# "macos", "arch", "debian", "alpine" #}
{{ env.shell }} {# "zsh", "bash", "fish", "sh" #}
{{ env.in_docker }} {# true/false #}
{{ env.in_ssh }} {# true/false #}
{# Tools #}
{{ env.has_mise }}
{{ env.has_brew }}
{{ env.has_apt }}
{{ env.has_gh }}
{{ env.has_git }}
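As a sketch, these context values can be combined in a template to emit platform-specific guidance. The guidance text below is illustrative, not something Helmsman ships:

```jinja
{# Hypothetical AGENTS.md.j2 fragment using the context above #}
{% if env.os == "macos" and env.has_brew %}
Install missing tools with brew; do not use sudo.
{% elif env.has_apt %}
Install missing tools with apt; assume a Debian-based system.
{% endif %}
{% if env.in_docker %}
You are in a container - changes outside the workdir will not persist.
{% endif %}
```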
Configuration
Create optional helmsman.toml in:
- Current directory (project-local)
- ~/.config/helmsman/ (user global)
- Set $HELMSMAN_CONFIG env var to override
[defaults]
tier = "engineer"
[server]
templates_dir = "~/my-templates"
Model → tier mappings are pre-configured for Anthropic, OpenAI, Google, and other major models. Unknown models default to engineer tier.
Skills
Project skills live in .skills/ and are discovered automatically. Files prefixed with _ are partials.
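A minimal sketch of what a skill file might look like, assuming a hypothetical `_style.j2` partial (the filenames and prompt text here are illustrative):

```jinja
{# .skills/commit.j2 - rendered via: helmsman -s commit #}
{% include "_style.j2" %}
Write a conventional commit message for the staged changes.
{% if model.tier == "monkey" %}
Use the format type(scope): subject. Keep the subject under 72 characters.
{% endif %}
```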
Environment Detection
Helmsman detects OS, shell, and available tools automatically. Best effort only, never authoritative.
Known edge cases:
- SSH + Docker may report wrong shell ($SHELL lies)
- Alpine/busybox may lack expected binaries
- Container detection uses heuristics (cgroup parsing)
Use these values for optimization hints, not hard requirements.
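In templates, that means treating detection values as soft hints with a fallback branch. A sketch, with illustrative guidance text:

```jinja
{# Treat detection as a hint: add a fallback where the value may be wrong #}
{% if env.in_ssh and env.in_docker %}
Shell detection may be unreliable here; verify before assuming {{ env.shell }}.
{% else %}
Assume an interactive {{ env.shell }} session.
{% endif %}
```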
Non-Goals
Things Helmsman deliberately doesn't do:
- Prompt engineering framework - not here to optimize your prompts
- Model memory/learning - stateless, no persistence between calls
- Teaching tool - assumes you know what you're doing
- Configuration management - use real tools for that
Helmsman is infrastructure, not a product.
License
BSD-3-Clause