dxa-agent

Security Audit
Warning
Health Warning
  • License — License: NOASSERTION
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 6 GitHub stars
Code Warning
  • network request — Outbound network request in package.json
Permissions Passed
  • Permissions — No dangerous permissions requested
Purpose
This is a terminal-based AI coding agent that connects to various local or cloud-based language models to assist with development tasks. It is distributed as both a CLI tool and a VS Code extension.

Security Assessment
Risk Rating: Medium. As an AI coding agent, this tool inherently executes shell commands, interacts with your local file system, and modifies code. Additionally, the audit flags outbound network requests in the package dependencies. This is expected behavior since the tool must communicate with external AI APIs (like OpenAI, Anthropic, or local Ollama instances) to function, but users should be aware that code context and prompts are sent over the internet. The scan noted a "NOASSERTION" license status, though the documentation explicitly claims it uses an MIT license. No dangerous permissions or hardcoded secrets were detected in the codebase.

Quality Assessment
The project is highly active, with the most recent code push occurring today. It shows good software hygiene, providing comprehensive documentation, CI checks, and a dedicated security policy. However, the tool suffers from extremely low community visibility. With only 6 GitHub stars, it has not undergone the widespread peer review and testing that established alternatives have. It is an independent fork with its own packaging, legal framing, and documentation, but users are relying on a very small pool of developers and reviewers.

Verdict
Use with caution: while the project is actively maintained and transparently documented, its extremely low community adoption means it has seen limited external security auditing, so you should carefully monitor its network requests and file system access.
SUMMARY

A terminal AI coding agent that works across local and cloud models, from local inference via Ollama to OpenAI-compatible API services.

README.md

OpenClaude

OpenClaude is an MIT-licensed terminal coding agent: one openclaude command, pluggable model backends (Anthropic Claude, OpenAI-compatible APIs, Gemini, GitHub Models, Ollama, Atomic Chat, and others), tools, MCP, and slash commands. This repo ships the CLI plus a VS Code extension and a dark terminal theme.

Legal: not affiliated with Anthropic, PBC, or any other vendor. Trademarks, MIT terms, and how to raise concerns: LEGAL.md (general information only—not legal advice).

Upstream: this repo is an independent distribution (@dxiv/openclaude on npm). Core CLI behaviour is periodically aligned with Gitlawb/openclaude (see docs/maintainers.md for how to sync src/). Docs, legal framing, CI, and packaging here are specific to this fork.

PR Checks
Release
Discussions
Security Policy
License

Quick start · Setup · Providers · Source build · Repo layout · VS Code · Contributing · Security · Community

New to terminals or npm? See docs/non-technical-setup.md (Windows or macOS / Linux), then the setup checklist and first run guide.
All docs: docs/README.md.

Why use it

  • One CLI for cloud APIs and local inference
  • /provider for guided setup and a saved profile
  • Bash, file tools, grep/glob, agents, tasks, MCP, web helpers
  • Optional VS Code integration from this repo

Quick start

You need Node.js 20+ and a terminal. If that’s new territory, use docs/non-technical-setup.md first.

Install

npm install -g @dxiv/openclaude

Install ripgrep and ensure rg is on your PATH. If the CLI prints ripgrep not found, fix PATH, then open a new terminal window — Troubleshooting has more detail.
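
A quick way to confirm ripgrep is visible to your shell (a minimal sketch; the exact "ripgrep not found" wording above is the CLI's, not this snippet's):

```shell
# Check whether ripgrep is installed and on PATH
if command -v rg >/dev/null 2>&1; then
  rg_status="found"
  echo "ripgrep found: $(rg --version | head -n 1)"
else
  rg_status="missing"
  echo "ripgrep not found - install it, fix PATH, then open a new terminal"
fi
```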

Start

openclaude

Inside OpenClaude:

  • run /provider for guided provider setup and saved profiles
  • run /onboard-github for GitHub Models onboarding

Fastest OpenAI setup

macOS / Linux:

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o

openclaude

Windows PowerShell:

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"

openclaude

Fastest local Ollama setup

macOS / Linux:

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=qwen2.5-coder:7b

openclaude

Windows PowerShell:

$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="qwen2.5-coder:7b"

openclaude
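
Before launching against Ollama, you can sanity-check that the local server is actually answering (a sketch assuming the default port 11434 used above):

```shell
# Probe the local Ollama OpenAI-compatible endpoint before starting the agent
if curl -sf http://localhost:11434/v1/models >/dev/null 2>&1; then
  ollama_up=1
  echo "Ollama is reachable on localhost:11434"
else
  ollama_up=0
  echo "Ollama is not responding on localhost:11434 - start it with 'ollama serve'"
fi
```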

Setup guides

Index: docs/README.md · Checklist: docs/setup-checklist.md · After install: docs/first-run.md · Problems: docs/troubleshooting.md

Beginner-friendly:

Advanced / source build:

  • Advanced setup — Bun, profiles, doctor:*, env table
  • .env.example — template in git; copy to .env for a local clone, uncomment one provider block (see file header)
  • Android (Termux) — build inside proot Ubuntu

Optional: python/ — small Python helpers for experiments; not required for normal CLI install (python/README.md).

Supported providers

Provider | Setup Path | Notes
Anthropic (Claude) | /provider or env vars | Cloud default path; set ANTHROPIC_API_KEY in .env (layout in .env.example)
OpenAI-compatible | /provider or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible /v1 servers
Gemini | /provider or env vars | Supports API key, access token, or local ADC workflow on current main
GitHub Models | /onboard-github | Interactive onboarding with saved credentials
Codex | /provider | Uses existing Codex credentials when available
Ollama | /provider or env vars | Local inference with no API key
Atomic Chat | advanced setup | Local Apple Silicon backend
Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments
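
For the Anthropic cloud path, an env-var setup mirrors the OpenAI examples earlier (the key value is a placeholder):

```shell
# macOS / Linux - placeholder key, replace with your own
export ANTHROPIC_API_KEY=sk-ant-your-key-here

# then launch:
#   openclaude
```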

What works

  • Tool-driven coding workflows: Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
  • Streaming responses: Real-time token output and tool progress
  • Tool calling: Multi-step tool loops with model calls, tool execution, and follow-up responses
  • Images: URL and base64 image inputs for providers that support vision
  • Provider profiles: Guided setup plus saved .openclaude-profile.json support
  • Local and remote model backends: Cloud APIs, local servers, and Apple Silicon local inference

Provider notes

OpenClaude supports multiple providers, but behaviour is not identical across all of them.

  • Anthropic-specific features may not exist on other providers
  • Tool quality depends heavily on the selected model
  • Smaller local models can struggle with long multi-step tool flows
  • Some providers impose lower output caps than the CLI defaults, and OpenClaude adapts where possible

For best results, use models with strong tool/function calling support.

Agent routing

OpenClaude can route different agents to different models through settings-based routing. This is useful for cost optimisation or splitting work by model strength.

Add to ~/.claude/settings.json:

{
  "agentModels": {
    "deepseek-chat": {
      "base_url": "https://api.deepseek.com/v1",
      "api_key": "sk-your-key"
    },
    "gpt-4o": {
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-your-key"
    }
  },
  "agentRouting": {
    "Explore": "deepseek-chat",
    "Plan": "gpt-4o",
    "general-purpose": "gpt-4o",
    "frontend-dev": "deepseek-chat",
    "default": "gpt-4o"
  }
}

When no routing match is found, the global provider remains the fallback.

api_key values in settings.json are plaintext. Don’t commit that file.
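
One mitigation (a sketch, assuming a Unix-like system) is to keep that file readable only by your user:

```shell
# Restrict the settings file that holds plaintext api_key values to your user only
settings="$HOME/.claude/settings.json"
if [ -f "$settings" ]; then
  chmod 600 "$settings"
  perm_result="tightened"
else
  perm_result="absent"
fi
echo "settings file: $perm_result"
```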

Web search and fetch

By default, WebSearch works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.

DuckDuckGo fallback scrapes search results; it can be rate-limited or blocked. For something sturdier, wire up Firecrawl below.

For Anthropic-native backends and Codex responses, OpenClaude keeps the native provider web search behaviour.

WebFetch works, but its basic HTTP plus HTML-to-markdown path can still fail on JavaScript-rendered sites or sites that block plain HTTP requests.

Set a Firecrawl API key if you want Firecrawl-powered search/fetch behaviour:

export FIRECRAWL_API_KEY=your-key-here

With Firecrawl enabled:

  • WebSearch can use Firecrawl's search API while DuckDuckGo remains the default free path for non-Claude models
  • WebFetch uses Firecrawl's scrape endpoint instead of raw HTTP, handling JS-rendered pages correctly

Free tier at firecrawl.dev includes 500 credits. The key is optional.

Source build and local development

bun install
bun run build
node dist/cli.mjs

From a clone: create .env from .env.example, uncomment one provider block, put real values in .env (the example file header explains the fields).

Bun is what the repo scripts expect. Common commands:

  • bun run typecheck
  • bun run dev
  • bun test
  • bun run test:coverage
  • bun run security:pr-scan -- --base origin/main
  • bun run smoke
  • bun run doctor:runtime
  • bun run verify:privacy
  • focused bun test ... for the areas you touch

Tags: pushing a v* tag runs release artefacts (uploads dist/cli.mjs as a CI artefact). Maintainer checklist: docs/maintainers.md.

Testing and coverage

Tests use Bun’s built-in runner.

bun test

Coverage (writes coverage/lcov.info and a heatmap at coverage/index.html):

bun run test:coverage

Open the HTML report with open coverage/index.html on macOS / Linux, or start coverage/index.html in Windows PowerShell.

Rebuild only the coverage UI from an existing lcov.info:

bun run test:coverage:ui

Targeted runs:

  • bun run test:provider
  • bun run test:provider-recommendation
  • bun test path/to/file.test.ts

Before opening a PR, a sensible smoke pass is bun run build, bun run smoke, then either focused bun test … on what you touched or bun run test:coverage if you changed shared runtime or provider code.
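
Chained together, the pass described above looks like this (a sketch that assumes you are inside a clone with Bun installed):

```shell
# Minimal pre-PR smoke pass; bails out with a hint if Bun is missing
if command -v bun >/dev/null 2>&1; then
  bun run build && bun run smoke && bun run test:coverage
  smoke_status=$?
else
  echo "bun not found - install Bun first (bun.sh)"
  smoke_status=127
fi
```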

Repository structure

The CLI is built from src/ into dist/cli.mjs; bin/openclaude is the published entrypoint npm calls. Everything else is documentation, build/CI tooling, the VS Code add-on, optional python/ helpers, or policy files at the repo root — each path is described under Paths below.

Layout

flowchart TB
  subgraph DOC[Documentation]
    direction LR
    D1[docs/]
    D2[README.md]
    D3[ANDROID_INSTALL.md]
  end
  subgraph AGENT[Terminal agent]
    direction LR
    A1[src/]
    A2[bin/]
    A3[package.json]
    A4[tsconfig.json]
  end
  subgraph OUT[Build output]
    O1[dist/cli.mjs]
  end
  subgraph META[Tooling and meta]
    direction LR
    M1[scripts/]
    M2[vscode-extension/]
    M3[python/]
    M4[.github/]
    M5[.env]
  end
  AGENT --> OUT

.env is what you edit on your machine (gitignored). .env.example is only the checked-in template — copy it to .env once, then change .env, not the example file.

Clone vs npm install

A full git clone matches the chart. npm install -g @dxiv/openclaude only unpacks what package.json lists under "files" — right now bin/, dist/cli.mjs, and README.md.
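
You can verify that file list yourself without installing anything; npm pack --dry-run reports the tarball contents without writing a file (needs network access to the registry):

```shell
# Show what the published tarball would contain, without writing a file
if command -v npm >/dev/null 2>&1; then
  npm pack --dry-run @dxiv/openclaude 2>/dev/null \
    || echo "could not query the npm registry"
  pack_checked=1
else
  echo "npm is not installed"
  pack_checked=0
fi
```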

flowchart LR
  subgraph CLONE[Git clone]
    C1[entire repo]
  end
  subgraph NPM[npm package]
    direction TB
    N1[bin/]
    N2[dist/cli.mjs]
    N3[README.md]
  end

Paths

Documentation

  • docs/ — User guides: index, checklist, first run, troubleshooting
  • ANDROID_INSTALL.md — Build inside Termux / proot Ubuntu
  • README.md — Project overview (also included in the npm tarball)

Terminal agent

  • src/ — Core CLI and runtime (providers, tools, MCP, UI)
  • bin/ — openclaude launcher (runs dist/cli.mjs when built)
  • package.json — Metadata, scripts, and the published files list
  • tsconfig.json — TypeScript project for src/

Build and checks

  • scripts/ — Build pipeline, doctor:*, security scans, coverage helpers

Editor add-on

  • vscode-extension/openclaude-vscode/ — VS Code integration and terminal theme (extension readme)

Optional

  • python/ — Small Python helpers for experiments; not required for normal CLI install (python/README.md)

Repository / CI

  • .github/ — PR checks, v* release artefacts, Dependabot, issue/PR templates
  • .env — Your provider keys when working from a clone (gitignored). Duplicate .env.example to .env, then edit .env only (cp .env.example .env on Unix; Copy-Item .env.example .env in PowerShell).
  • .env.example — Reference template in the repo; do not put secrets here.
  • Root — CONTRIBUTING.md, CHANGELOG.md, LEGAL.md, LICENSE, SECURITY.md

VS Code extension

vscode-extension/openclaude-vscode/: launch the CLI from the editor, Control Centre in the activity bar, bundled terminal theme. Extension readme.

Security

If you believe you found a security issue, see SECURITY.md.

Community

  • Discussions — questions, ideas, general chat
  • Issues — bugs and concrete feature requests

Contributing

CONTRIBUTING.md covers clone, bun install, build, and what CI expects. Big or ambiguous changes: open an issue before a huge PR.

Legal / trademarks

MIT applies to material in this repository; dependencies have their own licences. Third-party names appear only where descriptive (see LEGAL.md). Full licence text: LICENSE. Not legal advice—consult a solicitor or other qualified legal adviser if you need a formal opinion.
