openclaude
Health: Warn
- No license — Repository has no license file
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 11 GitHub stars
Code: Fail
- network request — Outbound network request in package.json
- process.env — Environment variable access in scripts/provider-bootstrap.ts
- network request — Outbound network request in scripts/provider-bootstrap.ts
- process.env — Environment variable access in scripts/provider-launch.ts
- network request — Outbound network request in scripts/provider-launch.ts
- spawnSync — Synchronous process spawning in scripts/system-check.ts
- process.env — Environment variable access in scripts/system-check.ts
- network request — Outbound network request in scripts/system-check.ts
- process.env — Environment variable access in src/QueryEngine.ts
Permissions: Pass
- Permissions — No dangerous permissions requested
This tool is an unofficial, modified version of Claude Code designed to work with any OpenAI-compatible Large Language Model (such as GPT-4, Gemini, or Llama). It enables developers to use Claude Code's built-in tools and features while routing the underlying requests to alternative AI providers.
Security Assessment
Overall Risk: High. The application functions by executing shell commands, reading and writing files, and spawning synchronous processes (flagged in `system-check.ts`). It makes multiple outbound network requests to external APIs to process your prompts, so code and prompt contents leave your machine. Users must supply API keys via environment variables, which is standard practice, but there is a major legal and security caveat: the code is explicitly based on a leaked source-code map. Additionally, the repository lacks a license file, so no rights to use, modify, or redistribute the code are granted.
Quality Assessment
While the project is relatively new, with only 11 GitHub stars, it is very active, with recent updates. However, its overall standing is severely undermined by its legal status: forking and redistributing leaked proprietary commercial code without an open-source license poses significant intellectual-property risk for any developer who installs or contributes to it.
Verdict
Not recommended due to severe legal and intellectual property risks stemming from the distribution of leaked proprietary source code.
Open-Source Version of Claude Code (mirrored)
OpenClaude
Use Claude Code with any LLM — not just Claude.
OpenClaude is a fork of the Claude Code source leak (exposed via npm source maps on March 31, 2026). We added an OpenAI-compatible provider shim so you can plug in GPT-4o, DeepSeek, Gemini, Llama, Mistral, or any model that speaks the OpenAI chat completions API.
All of Claude Code's tools work — bash, file read/write/edit, grep, glob, agents, tasks, MCP — just powered by whatever model you choose.
Install
Option A: npm (recommended)
npm install -g openclaude
Option B: From source (requires Bun)
# Clone from GitHub
git clone https://github.com/hatixntsoa/openclaude.git
cd openclaude
# Install dependencies
bun install
# Build
bun run build
# Link globally (optional)
npm link
Option C: Run directly with Bun (no build step)
git clone https://github.com/hatixntsoa/openclaude.git
cd openclaude
bun install
bun run dev
Quick Start
1. Set three environment variables
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
2. Run it
# If installed via npm
openclaude
# If built from source
bun run dev
# or after build:
node dist/cli.mjs
That's it. The tool system, streaming, file editing, multi-step reasoning — everything works through the model you picked.
Provider Examples
OpenAI
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
DeepSeek
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
Google Gemini (via OpenRouter)
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash
Ollama (local, free)
ollama pull llama3.3:70b
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
# no API key needed for local models
LM Studio (local)
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
Together AI
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
Groq
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
Mistral
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.mistral.ai/v1
export OPENAI_MODEL=mistral-large-latest
Azure OpenAI
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
Environment Variables
| Variable | Required | Description |
|---|---|---|
| CLAUDE_CODE_USE_OPENAI | Yes | Set to 1 to enable the OpenAI provider |
| OPENAI_API_KEY | Yes* | Your API key (*not needed for local models like Ollama) |
| OPENAI_MODEL | Yes | Model name (e.g. gpt-4o, deepseek-chat, llama3.3:70b) |
| OPENAI_BASE_URL | No | API endpoint (defaults to https://api.openai.com/v1) |
You can also use ANTHROPIC_MODEL to override the model name. OPENAI_MODEL takes priority.
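The precedence described above can be sketched as a small helper. Note that `resolveModel` and the fallback default are hypothetical names for illustration; the real logic lives in `src/utils/model/model.ts`:

```typescript
// Hypothetical sketch of model-name resolution: OPENAI_MODEL wins,
// then ANTHROPIC_MODEL, then a fallback default (assumed here).
function resolveModel(env: Record<string, string | undefined>): string {
  return env.OPENAI_MODEL ?? env.ANTHROPIC_MODEL ?? "gpt-4o";
}

console.log(resolveModel({ OPENAI_MODEL: "deepseek-chat", ANTHROPIC_MODEL: "claude-3" }));
// deepseek-chat — OPENAI_MODEL takes priority
```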
What Works
- All tools: Bash, FileRead, FileWrite, FileEdit, Glob, Grep, WebFetch, WebSearch, Agent, MCP, LSP, NotebookEdit, Tasks
- Streaming: Real-time token streaming
- Tool calling: Multi-step tool chains (the model calls tools, gets results, continues)
- Images: Base64 and URL images passed to vision models
- Slash commands: /commit, /review, /compact, /diff, /doctor, etc.
- Sub-agents: AgentTool spawns sub-agents using the same provider
- Memory: Persistent memory system
What's Different
- No thinking mode: Anthropic's extended thinking is disabled (OpenAI models use different reasoning)
- No prompt caching: Anthropic-specific cache headers are skipped
- No beta features: Anthropic-specific beta headers are ignored
- Token limits: Defaults to 32K max output — some models may cap lower, which is handled gracefully
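"Handled gracefully" presumably means clamping the requested output budget to whatever the provider allows. A minimal sketch, assuming a simple min-based clamp (function name and default constant are illustrative, not taken from the shim):

```typescript
// Hypothetical sketch of graceful output-token clamping: request the
// 32K default, but never exceed a provider-reported ceiling.
const DEFAULT_MAX_OUTPUT = 32_000;

function clampMaxTokens(providerCap?: number): number {
  return providerCap !== undefined
    ? Math.min(DEFAULT_MAX_OUTPUT, providerCap)
    : DEFAULT_MAX_OUTPUT;
}

console.log(clampMaxTokens(8_192)); // 8192 — capped by the provider
console.log(clampMaxTokens());      // 32000 — default applies
```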
How It Works
The shim (src/services/api/openaiShim.ts) sits between Claude Code and the LLM API:
Claude Code Tool System
|
v
Anthropic SDK interface (duck-typed)
|
v
openaiShim.ts <-- translates formats
|
v
OpenAI Chat Completions API
|
v
Any compatible model
It translates:
- Anthropic message blocks → OpenAI messages
- Anthropic tool_use/tool_result → OpenAI function calls
- OpenAI SSE streaming → Anthropic stream events
- Anthropic system prompt arrays → OpenAI system messages
The rest of Claude Code doesn't know it's talking to a different model.
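One translation direction can be sketched as follows. The interfaces below are simplified, hypothetical shapes for illustration only — the real `openaiShim.ts` handles many more block types and edge cases. The one format detail worth noting is that the OpenAI API expects function arguments as a JSON string, not an object:

```typescript
// Simplified, hypothetical shapes: an Anthropic-style tool_use content
// block becomes an OpenAI assistant message carrying a tool call.
interface AnthropicToolUse {
  type: "tool_use";
  id: string;
  name: string;
  input: Record<string, unknown>;
}

interface OpenAIToolCallMessage {
  role: "assistant";
  tool_calls: Array<{
    id: string;
    type: "function";
    function: { name: string; arguments: string };
  }>;
}

function toOpenAI(block: AnthropicToolUse): OpenAIToolCallMessage {
  return {
    role: "assistant",
    tool_calls: [{
      id: block.id,
      type: "function",
      // OpenAI expects function arguments serialized as a JSON string
      function: { name: block.name, arguments: JSON.stringify(block.input) },
    }],
  };
}
```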
Model Quality Notes
Not all models are equal at agentic tool use. Here's a rough guide:
| Model | Tool Calling | Code Quality | Speed |
|---|---|---|---|
| GPT-4o | Excellent | Excellent | Fast |
| DeepSeek-V3 | Great | Great | Fast |
| Gemini 2.0 Flash | Great | Good | Very Fast |
| Llama 3.3 70B | Good | Good | Medium |
| Mistral Large | Good | Good | Fast |
| GPT-4o-mini | Good | Good | Very Fast |
| Qwen 2.5 72B | Good | Good | Medium |
| Smaller models (<7B) | Limited | Limited | Very Fast |
For best results, use models with strong function/tool calling support.
Files Changed from Original
src/services/api/openaiShim.ts — NEW: OpenAI-compatible API shim (724 lines)
src/services/api/client.ts — Routes to shim when CLAUDE_CODE_USE_OPENAI=1
src/utils/model/providers.ts — Added 'openai' provider type
src/utils/model/configs.ts — Added openai model mappings
src/utils/model/model.ts — Respects OPENAI_MODEL for defaults
src/utils/auth.ts — Recognizes OpenAI as valid 3P provider
6 files changed. 786 lines added. Zero dependencies added.
Origin
This is a fork of instructkr/claude-code, which mirrored the Claude Code source snapshot that became publicly accessible through an npm source map exposure on March 31, 2026.
The original Claude Code source is the property of Anthropic. This repository is not affiliated with or endorsed by Anthropic.
License
This repository is provided for educational and research purposes. The original source code is subject to Anthropic's terms. The OpenAI shim additions are public domain.