locode
Health Pass
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 21 GitHub stars
Code Warn
- fs module — File system access in benchmark/report/generate.js
Permissions Pass
- Permissions — No dangerous permissions requested
This is a local-first AI coding assistant that acts as a smart CLI router. It directs simple coding tasks to a local Ollama model and escalates complex reasoning tasks to Claude, helping users save on API token usage.
Security Assessment
Overall risk: Low. The tool operates primarily as a local agent and routes network requests exclusively to Ollama and the Anthropic API. It requires an `ANTHROPIC_API_KEY` to function fully, but handles it via standard environment variables without hardcoding secrets. Telemetry is opt-in and off by default, requiring the manual setup of a Sentry environment variable. The codebase contains file system access, which is standard for a CLI tool meant to read local codebases and generate reports. No dangerous permissions are requested, and no hidden or malicious data exfiltration vectors were detected.
Quality Assessment
The project is actively maintained and uses the permissive MIT license. It is explicitly labeled "Alpha Software" by its creator, so developers should expect potential instability, bugs, and sudden changes to configuration formats. Community trust is currently low but growing, at 21 GitHub stars. As an early-stage project it lacks extensive community validation and widespread production testing, so users should be prepared for rough edges.
Verdict
Safe to use, though treat it as experimental software due to its early alpha status.
Locode
Alpha Software — Use at Your Own Risk
Locode is under active development and has not been validated for production use. Interfaces, configuration formats, and behaviours may change without notice between releases. It is provided as-is, without warranty of any kind. Use in critical or production environments is not recommended at this stage.
Local-first AI coding CLI. Routes simple tasks to a local LLM (Ollama), complex tasks to Claude. Saves tokens.
Demo
⭐ If you find the idea interesting, please consider starring the repo. It helps a lot!
Quick Start
npm install -g @chocks-dev/locode
locode setup # installs Ollama, picks a model, saves API key
locode # start chatting
Architecture
User CLI
│
▼
Routing Logic
│
├── Local LLM (fast tasks)
│
└── Claude (complex reasoning)
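The routing step in the diagram above can be sketched roughly as follows. This is a hypothetical illustration, not locode's actual implementation: the rule patterns, the `route` function, and the confidence fallback are all assumptions based on the config keys documented below.

```typescript
// Hypothetical sketch of regex-based routing with an escalation threshold.
type Target = "local" | "claude";

interface Rule {
  pattern: RegExp;
  target: Target;
}

// Illustrative rules; the real ones would come from locode.yaml.
const rules: Rule[] = [
  { pattern: /\b(grep|rename|format|list)\b/i, target: "local" },
  { pattern: /\b(refactor|architect|design|debug)\b/i, target: "claude" },
];

function route(prompt: string, confidence: number, threshold = 0.6): Target {
  for (const rule of rules) {
    if (rule.pattern.test(prompt)) return rule.target;
  }
  // No rule matched: escalate to Claude when the local model's
  // self-reported confidence falls below the threshold.
  return confidence < threshold ? "claude" : "local";
}
```

The appeal of this shape is that cheap, mechanical tasks never touch the paid API, while ambiguous prompts still get escalated when local confidence is low.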
Commands
| Command | Description |
|---|---|
| `locode` | Interactive REPL (default) |
| `locode run "<prompt>"` | Single-shot task execution |
| `locode setup` | First-run wizard (Ollama + model + API key) |
| `locode install [model]` | Pull a specific Ollama model |
| `locode update` | Update locode to the latest version |
| `locode benchmark` | Compare token cost across routing modes |
Flags
locode chat --claude-only # skip local, send everything to Claude
locode chat --local-only # skip Claude, use Ollama only
locode chat --config ./custom.yaml # use a custom config file
locode benchmark --prompt "build a REST API" --output report.html
If no ANTHROPIC_API_KEY is set, locode automatically runs in local-only mode.
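That fallback can be sketched as a small pure function. The `Mode` type and `effectiveMode` name are illustrative, not locode's actual code:

```typescript
// Decide the effective routing mode from the CLI flag and environment.
type Mode = "auto" | "claude-only" | "local-only";

function effectiveMode(flagMode: Mode, apiKey: string | undefined): Mode {
  // Without an Anthropic key, Claude is unreachable, so force local-only.
  if (!apiKey) return "local-only";
  return flagMode;
}
```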
Config
Edit locode.yaml for routing rules, models, and thresholds:
- `local_llm.model` — Ollama model (default: `qwen3:8b`)
- `routing.rules` — regex patterns that route tasks to local or Claude
- `routing.escalation_threshold` — confidence below this escalates to Claude
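An illustrative `locode.yaml` using those keys (the exact schema and rule syntax are assumptions; check the shipped default config for the real shape):

```yaml
local_llm:
  model: qwen3:8b            # any installed Ollama model

routing:
  escalation_threshold: 0.6  # below this confidence, escalate to Claude
  rules:
    - pattern: "\\b(grep|rename|format)\\b"    # simple tasks stay local
      target: local
    - pattern: "\\b(refactor|architecture)\\b" # complex tasks go to Claude
      target: claude
```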
Type stats in the REPL to see token usage and estimated savings.
Telemetry (Opt-in)
Telemetry is off by default. To opt in, export in your shell profile:
export SENTRY_DSN="https://[email protected]/456"
When enabled: captures unhandled exceptions and samples 20% of performance traces.
Never sent: prompts, API keys, file contents. Unset SENTRY_DSN to disable.
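One way to picture the opt-in gate and the 20% sampling, as a sketch only: locode most likely delegates sampling to the Sentry SDK itself (its `tracesSampleRate` option), but a deterministic counter-based sampler makes the behaviour easy to see.

```typescript
// Telemetry is enabled only when SENTRY_DSN is present in the environment.
const telemetryEnabled = (): boolean => Boolean(process.env.SENTRY_DSN);

// Deterministic sketch of a 20% sampler: keeps exactly 1 trace in 5.
// (Sentry's real sampler is probabilistic.)
function makeSampler(rate = 0.2): () => boolean {
  const period = Math.round(1 / rate);
  let count = 0;
  return () => count++ % period === 0;
}
```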
Development
git clone https://github.com/chocks/locode && cd locode
npm install
npm run dev # run with ts-node
npm test # vitest
npm run build # tsc → dist/
Project Structure
src/
cli/ # REPL, setup, install, update, benchmark
config/ # Zod schema + YAML loader
agents/ # LocalAgent (Ollama) + ClaudeAgent (Anthropic SDK)
orchestrator/ # Router + Orchestrator
tools/ # readFile, shell (allow-list), git
tracker/ # Token usage + cost estimation
E2E Tests
End-to-end tests verify the full CLI pipeline by spawning locode against lightweight HTTP stub servers that mimic Ollama and Anthropic APIs. No external services required.
Prerequisites: Build the project first — E2E tests run the compiled CLI.
npm run build
npm run test:e2e
The tests verify:
- Simple prompts (e.g., `grep`) route to local LLM
- Complex prompts (e.g., `refactor`) route to Claude
- Missing API key triggers local-only fallback
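A minimal stub server along the lines described above could look like this. The `/api/chat` path and the response shape are assumptions loosely modeled on Ollama's chat endpoint, not the project's actual fixtures:

```typescript
import http from "node:http";

// Minimal stub that mimics an Ollama-style chat endpoint.
// An E2E test would start this on an ephemeral port and point
// locode's Ollama base URL at it.
function createOllamaStub(): http.Server {
  return http.createServer((req, res) => {
    if (req.method === "POST" && req.url === "/api/chat") {
      res.writeHead(200, { "content-type": "application/json" });
      res.end(
        JSON.stringify({
          model: "stub-model",
          message: { role: "assistant", content: "stub reply" },
          done: true,
        })
      );
    } else {
      res.writeHead(404);
      res.end();
    }
  });
}
```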
Contributing
- Fork and branch from `main` (never commit directly)
- TDD: write the failing test first, then implement
- Run `npm test && npm run build` before opening a PR
- One feature/fix per PR
Releasing
Releases are tag-driven — CI publishes to npm on v* tag push.
git checkout -b release/vX.Y.Z
npm run release:patch # bump package.json
git add package.json package-lock.json
git commit -S -m "chore: release vX.Y.Z"
gh pr create --fill
# after merge:
git checkout main && git pull
git tag -s "vX.Y.Z" -m "Release vX.Y.Z"
git push origin "vX.Y.Z"