Grinta-Coding-Agent
Health — Warning
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 5 GitHub stars
Code — Passed
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions — Passed
- Permissions — No dangerous permissions requested
This is a local-first autonomous coding agent that plans, executes, validates, and completes software tasks end-to-end directly from your CLI. It is designed to be provider-agnostic, allowing you to use various LLMs without locking you into a specific vendor.
Security Assessment
As an autonomous coding agent, this tool inherently reads your local source code and executes shell commands to perform its tasks. The automated code scan of 12 files found no dangerous patterns and no hardcoded secrets. It does not request overtly dangerous permissions and operates without a cloud control plane. Because it acts autonomously on your system, you should be aware of its ability to modify files and run commands. Overall risk is rated Medium due to the elevated system access required for it to function.
Quality Assessment
The project is under very active development, with its most recent push being just today. It utilizes strong development practices, including type checking (mypy), linting (Ruff), and automated CI/CD pipelines for testing. It is properly licensed under the permissive MIT license. However, community trust and visibility are currently very low. With only 5 GitHub stars, the project has not yet been widely vetted by a large audience. Furthermore, it is currently in v1.0.0-rc1 (Release Candidate) status, meaning it is still seeking real-world feedback before its final stable release.
Verdict
Use with caution — the project is well-structured and actively maintained, but its low community adoption and inherent autonomous execution capabilities warrant careful supervision.
Local-first autonomous coding agent that plans, executes, validates, and finishes software tasks end-to-end.
Grinta
Local-first. Provider-agnostic. Ships with real LSP + DAP. ~1.4 MB wheel.
A CLI coding agent that plans, executes, validates, and finishes — without a cloud control plane, without lock-in to one model vendor, and without a 1.6 GB install.
Current status: `v1.0.0-rc1`, a public release candidate. Intended for real-world use, with focused community feedback requested before GA.
Grinta in action
Full session recording (~2.5 min, with audio):
Direct link if the player does not load: grinta-demo.mp4.
Release status
- Latest public build is `v1.0.0-rc1` (release candidate), not final GA.
- We are actively collecting feedback on first-run setup, UX clarity, and edge-case reliability.
- If you hit friction, please open an issue with the `RC Feedback` template from the issue chooser.
- GA criteria and release gate details live in docs/RELEASE_CHECKLIST.md.
Why Grinta vs the rest
| Feature | Grinta | Aider | Claude Code | Codex CLI |
|---|---|---|---|---|
| Install size (base wheel) | 1.4 MB | ~80 MB | ~15 MB | ~12 MB |
| Provider-agnostic (OpenAI / Anthropic / Google / Ollama / LM Studio / OpenRouter) | ✅ | ✅ | ❌ Anthropic only | ❌ OpenAI only |
| Local-first (works fully offline w/ Ollama) | ✅ auto-detected | partial | ❌ | ❌ |
| LSP integration (auto-discovers 17 servers) | ✅ | ❌ | partial | ❌ |
| DAP debugger integration | ✅ auto-discovered | ❌ | ❌ | ❌ |
| Cost / token / latency HUD | ✅ live | partial | ❌ | partial |
| Stuck-loop + cost-acceleration detection | ✅ | ❌ | partial | ❌ |
| Risk-classified actions + audit log | ✅ `hardened_local` | ❌ | partial | partial |
| Session checkpoint / resume / revert | ✅ event-stream | ✅ git | partial | ❌ |
| Windows-first parity (PowerShell) | ✅ | partial | partial | partial |
| MCP support | ✅ | ❌ | ✅ best in class | partial |
The pitch in one sentence: everything Aider's local-first ethos gives you, plus the depth of tooling Claude Code has, without locking you to a single model vendor.
Install in 30 seconds
pipx install grinta-ai # lean install (~few MB) — all you need for coding
grinta init # one-time wizard: pick provider + paste key
grinta # launch the REPL in the current directory
Optional extras (install only what you need):
pipx install "grinta-ai[rag]" # adds chromadb (~80MB ONNX model) for vector memory
pipx install "grinta-ai[documents]" # adds PDF / DOCX / PPTX / LaTeX parsing
pipx install "grinta-ai[browser]" # adds browser-use for web automation
pipx install "grinta-ai[all]" # everything
That is the whole setup. The grinta init wizard auto-detects local Ollama and LM Studio servers and writes a working settings file for you. Installed runs use ~/.grinta/settings.json; source checkouts use the repository settings.json; APP_ROOT can intentionally override that root. Other install paths (uv, Homebrew, Scoop, and experimental Docker image usage) are in docs/INSTALL.md.
What you get
- Task completion, not just file edits. Validation gates and stuck detection block premature "done".
- Model-agnostic. OpenAI, Anthropic, Google, OpenRouter, Ollama, LM Studio — same prompt surface, same tools.
- Local-first. Code stays in your workspace; sessions, checkpoints, and audit logs live under `~/.grinta/workspaces/<id>/storage`.
- Strong safety rails. Risk-classified actions, CRITICAL refusal gate, secret masking, and a session-wide audit trail.
- Durable long sessions. Event-stream ledger, automatic compaction, manual `/checkpoint`, and revert.
- Lean TUI. Cost / tokens / latency / breaker state visible in the HUD; rich slash commands (`/help`).
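Secret masking, one of the safety rails listed above, can be sketched as a pass over text before it reaches the audit log. This is a minimal illustration only; the patterns and mask format here are assumptions, not Grinta's actual implementation:

```python
import re

# Illustrative credential patterns; a real masker would cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style keys
    re.compile(r"(?i)api[_-]?key\s*=\s*\S+"),  # key=value assignments
]

def mask_secrets(text: str) -> str:
    """Replace anything that looks like a credential with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

masked = mask_secrets("export OPENAI_API_KEY=sk-abcdefghijklmnopqrstuv")
print(masked)  # the key material is gone from the logged line
```

The point is that masking happens on the observation text itself, so secrets never reach the session-wide audit trail even if a command echoes them.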
Common slash commands
| Command | What it does |
|---|---|
| `/help` | Full slash-command reference |
| `/cost` | Tokens, calls, USD spent this session |
| `/diff` | Workspace git changes (`--stat`, `--name-only`, `--patch`) |
| `/sessions` | Recent sessions, with optional limit (`/sessions list 10`) |
| `/think` | Toggle the optional reasoning scratchpad |
| `/checkpoint` | Snapshot the workspace (revertable) |
| `/status` | Full HUD snapshot |
| `/compact` | Force context compaction now |
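A table like this typically maps onto a registry of handlers. The following is a hypothetical sketch of that pattern; the registry and handler names are not taken from Grinta's codebase:

```python
from typing import Callable

HANDLERS: dict[str, Callable[[list[str]], str]] = {}

def command(name: str):
    """Register a handler function under its slash-command name."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@command("/cost")
def cost(args: list[str]) -> str:
    return "tokens=0 calls=0 usd=0.00"

@command("/sessions")
def sessions(args: list[str]) -> str:
    # e.g. "/sessions list 10" -> args == ["list", "10"]
    limit = int(args[1]) if len(args) > 1 else 5
    return f"showing last {limit} sessions"

def dispatch(line: str) -> str:
    name, *args = line.split()
    handler = HANDLERS.get(name)
    return handler(args) if handler else f"unknown command {name}"

print(dispatch("/sessions list 10"))  # showing last 10 sessions
```

The decorator keeps each command self-describing, which is one common way a `/help` listing can be generated from the registry itself.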
Security boundary
Grinta executes actions on the local host. hardened_local adds stricter policy checks but is not sandboxing or process isolation. Read docs/SECURITY_CHECKLIST.md before pointing Grinta at code you do not trust — for hostile codebases, run inside a VM or container.
Architecture (high level)
graph TB
User([User]) --> CLI[CLI: backend.cli.entry]
CLI --> Orch[SessionOrchestrator]
Orch --> Engine[Engine\nplanning + tool intent]
Orch --> Pipe[Operation pipeline\nsafety + validation]
Pipe --> Runtime[RuntimeExecutor\nlocal execution]
Runtime --> Obs[Observations]
Obs --> Orch
Orch --> Ledger[EventStream / durability]
Orch --> FinishGate[Task validation\nbefore finish]
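The diagram above can be condensed into a toy plan / execute / observe / validate loop. Class and method names here are illustrative assumptions, not Grinta's API:

```python
class Agent:
    def __init__(self, task: str):
        self.task = task
        self.observations: list[str] = []

    def plan(self) -> list[str]:
        # Engine: turn the task into concrete steps (trivially stubbed here)
        return [f"step for {self.task}"]

    def execute(self, step: str) -> str:
        # RuntimeExecutor: run the step locally and return an observation
        return f"ran {step}"

    def validate(self) -> bool:
        # FinishGate: refuse to declare "done" with nothing observed
        return len(self.observations) > 0

    def run(self) -> bool:
        for step in self.plan():
            self.observations.append(self.execute(step))
        return self.validate()

assert Agent("add tests").run()
```

The key structural point the diagram makes is that observations flow back into the orchestrator, and the finish gate sits between execution and task completion.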
See docs/ARCHITECTURE.md for the deep dive.
Contributors: CI runs the full unit corpus on Linux and Windows (docs/CI.md); match that locally before opening a PR (CONTRIBUTING.md).
The story behind Grinta
Grinta is a single-author project, written and rewritten in public. The journey — what was killed, what was wrong, what got rebuilt — is The Book of Grinta: start at Preface → 00 · Meaning of Grinta through the numbered chapters to 38 · Vendor-Neutral Bench, then the epilogue 07 · The Road Ahead. Full index and act structure: docs/journey/README.md. Stable shortcut from the repo root: BOOK_OF_GRINTA.md.
Quick start (from source)
Windows (recommended)
.\START_HERE.ps1
Linux / macOS / manual
- Install dependencies in this repo’s environment only (creates/updates `.venv/`; do not rely on a global `pip install` mixed with unrelated tools):
uv sync --group browser
Optional dev/test tools: uv sync --group dev --group test --group browser.
- Create local settings:
uv run python -m backend.cli.entry init
- Start the CLI:
uv run python -m backend.cli.entry
If you previously installed grinta-ai with pip into a global interpreter, remove it (pip uninstall grinta-ai) and use uv run from this repository so dependencies stay isolated.
Docker (community / experimental)
Use the container image directly (no official compose stack in this repo):
docker run -it --rm -v "$PWD:/work" -w /work \
-e LLM_API_KEY=${LLM_API_KEY} \
ghcr.io/josephsenior/grinta:latest
LLM Setup (settings.json)
When installed through pipx, Homebrew, or Scoop, settings are resolved from ~/.grinta/settings.json. When running from a source checkout, settings resolve from the repository root unless APP_ROOT is set.
Minimal config:
{
"llm_provider": "openai",
"llm_model": "openai/gpt-4o-mini",
"llm_api_key": "${LLM_API_KEY}",
"llm_base_url": ""
}
For manual setup, put the real value in a sibling .env file or your shell environment as LLM_API_KEY; avoid keeping the only copy of a secret directly in settings.json.
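The `"${LLM_API_KEY}"` placeholder above implies environment-variable expansion when settings are loaded. A minimal sketch of that idea follows; Grinta's real resolution logic may differ:

```python
import os
import re

def expand_env(settings: dict) -> dict:
    """Replace ${VAR} placeholders in string values with environment values."""
    pattern = re.compile(r"\$\{(\w+)\}")
    return {
        key: pattern.sub(lambda m: os.environ.get(m.group(1), ""), value)
        if isinstance(value, str) else value
        for key, value in settings.items()
    }

os.environ["LLM_API_KEY"] = "test-key"
raw = {"llm_provider": "openai", "llm_api_key": "${LLM_API_KEY}"}
print(expand_env(raw)["llm_api_key"])  # test-key
```

This keeps the secret out of `settings.json` on disk while the running process still sees a concrete key.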
Common model ids:
- openai/gpt-4o-mini
- anthropic/claude-sonnet-4-20250514
- google/gemini-2.5-pro
- ollama/llama3.2
Core Concepts
Full task loop
Plan -> execute -> observe -> validate -> finish.
Context compaction
Grinta uses compactor strategies to keep long sessions coherent under context limits.
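One simple compactor strategy is to keep the newest messages that fit a token budget and fold the rest into a summary marker. This is a toy illustration of the idea, not Grinta's compactor:

```python
def compact(messages: list[str], budget: int) -> list[str]:
    """Keep the newest messages within `budget` (crudely counted by words)."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest-first
        cost = len(msg.split())     # stand-in for a real token count
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    dropped = len(messages) - len(kept)
    summary = [f"[compacted {dropped} older messages]"] if dropped else []
    return summary + list(reversed(kept))

history = ["a b c", "d e", "f g h i", "j k"]
print(compact(history, budget=7))
```

A production compactor would summarize the dropped messages with the model rather than discard them, but the budget-from-the-tail shape is the same.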
Reliability controls
Stuck detection, retry/recovery flows, and circuit breakers are built into orchestration.
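A circuit breaker in this context trips open after repeated consecutive failures so the orchestrator stops retrying a failing tool. A minimal sketch, with an illustrative threshold:

```python
class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        """Open means: stop calling the protected operation."""
        return self.failures >= self.max_failures

    def record(self, success: bool) -> None:
        # Any success resets the streak; failures accumulate toward the trip point.
        self.failures = 0 if success else self.failures + 1

breaker = CircuitBreaker(max_failures=3)
for ok in (False, False, False):
    breaker.record(ok)
print(breaker.open)  # True
```

The same consecutive-failure counting generalizes to stuck-loop detection: instead of errors, you count repeated identical actions or accelerating cost.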
Completion integrity
Task validation can block finish calls when tracked work is incomplete.
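The finish-blocking check reduces to a gate over tracked work items. A hypothetical sketch of that gate (names are illustrative, not Grinta's API):

```python
def can_finish(tracked_items: dict[str, bool]) -> bool:
    """Allow finish only when every tracked work item is marked done."""
    return all(tracked_items.values())

work = {"write tests": True, "fix lints": False}
print(can_finish(work))   # False: "fix lints" is still open
work["fix lints"] = True
print(can_finish(work))   # True: all tracked work is complete
```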
Documentation
- Documentation hub
- User Guide
- Quick Start
- Troubleshooting
- Support Matrix
- Architecture
- Developer Guide
- Vocabulary
- The Book of Grinta
- Contributing
- Governance
- Maintainers
- Community
- Roadmap
Contributing
See CONTRIBUTING.md.
License
MIT — see LICENSE.
Third-party Notices
Dependency attribution and notice policy: THIRD_PARTY_NOTICES.md.