codex-mcp-tool
Health Warn
- License — MIT
- No description — Repository has no description
- Active repo — Last push today
- Community trust — 21 GitHub stars
Code Fail
- rm -rf — Recursive force deletion command in scripts/deploy-wiki.sh
Permissions Pass
- Permissions — No dangerous permissions requested
This server bridges AI coding assistants like Claude or Cursor with the Codex CLI, enabling multi-turn code analysis with @ file references, sandboxed script execution, and local model integration.
Security Assessment
The server primarily acts as a wrapper that executes shell commands via the Codex CLI. Because it facilitates code execution and file manipulation, it inherently accesses local source code and can run system processes. No hardcoded secrets were detected, and the tool does not request elevated or dangerous permissions directly. However, the rule-based scan flagged an `rm -rf` (recursive force deletion) command in a deployment script (`scripts/deploy-wiki.sh`). While this is isolated to a repository maintenance script rather than the core server logic, it is a common indicator of poor script hygiene or a potential supply chain risk if modified. Overall risk is rated as Medium.
Quality Assessment
The project is actively maintained, with the most recent push occurring today. It uses the permissive MIT license. It has a modest but growing community presence with 21 GitHub stars. A minor quality concern is that the repository lacks a proper description, which slightly impacts discoverability and professionalism.
Verdict
Use with caution — the core functionality is standard and the project is actively maintained, but be aware of the inherent risks of a server that executes shell commands, and review the repository's deployment scripts before adopting it.
Codex MCP Server
MCP server connecting Claude/Cursor to Codex CLI. Enables code analysis via @ file references, multi-turn conversations, sandboxed edits, and structured change mode.
Features
- File Analysis — Reference files with `@src/`, `@package.json` syntax
- Multi-Turn Sessions — Conversation continuity with workspace isolation
- Native Resume — Uses `codex resume` for context preservation (CLI v0.36.0+)
- Local OSS Models — Run with Ollama or LM Studio via `localProvider`
- Web Search — Research capabilities with `search: true`
- Sandbox Mode — Safe code execution with `--full-auto`
- Change Mode — Structured OLD/NEW patch output for refactoring
- Brainstorming — SCAMPER, design-thinking, lateral thinking frameworks
- Health Diagnostics — CLI version, features, and session monitoring
- Cross-Platform — Windows, macOS, Linux fully supported
Quick Start
claude mcp add codex-cli -- npx -y @trishchuk/codex-mcp-tool
Prerequisites: Node.js 18+, Codex CLI installed and authenticated.
Configuration
{
"mcpServers": {
"codex-cli": {
"command": "npx",
"args": ["-y", "@trishchuk/codex-mcp-tool"]
}
}
}
Config locations: macOS: ~/Library/Application Support/Claude/claude_desktop_config.json | Windows: %APPDATA%\Claude\claude_desktop_config.json
Usage Examples
// File analysis
'explain the architecture of @src/';
'analyze @package.json and list dependencies';
// With specific model
'use codex with model gpt-5.5 to analyze @algorithm.py';
// Multi-turn conversations (v1.4.0+)
'ask codex sessionId:"my-project" prompt:"explain @src/"';
'ask codex sessionId:"my-project" prompt:"now add error handling"';
// Brainstorming
'brainstorm ways to optimize CI/CD using SCAMPER method';
// Sandbox mode
'use codex sandbox:true to create and run a Python script';
// Web search
'ask codex search:true prompt:"latest TypeScript 5.7 features"';
// Local OSS model (Ollama)
'ask codex localProvider:"ollama" model:"qwen3:8b" prompt:"explain @src/"';
Tools
| Tool | Description |
|---|---|
| `ask-codex` | Execute Codex CLI with file analysis, models, sessions |
| `brainstorm` | Generate ideas with SCAMPER, design-thinking, etc. |
| `list-sessions` | View/delete/clear conversation sessions |
| `health` | Diagnose CLI installation, version, features |
| `ping` / `help` | Test connection, show CLI help |
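Under the hood, an MCP client drives these tools through the standard JSON-RPC tools/call method. A minimal sketch of a raw request for ask-codex (the id and prompt values are illustrative):
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask-codex",
    "arguments": { "prompt": "explain the architecture of @src/" }
  }
}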
Models
By default the `model` parameter is omitted and Codex CLI applies the default model from your `~/.codex/config.toml` (e.g. `model = "gpt-5.5"`). Pass `model` only when you need to override the configured default for a single call (see the example below). Reasoning depth is calibrated per tool:
- `ask-codex` — uses Codex CLI default reasoning (medium). Pass `reasoningEffort: "high"` / `"xhigh"` for harder tasks.
- `brainstorm`, `do-act`, `review-changes` — default `reasoningEffort: "high"` (creative ideation, act-check-fix loops, and code review benefit from deeper reasoning).
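For example, a single call that overrides both the configured model and the reasoning depth might pass (the file path here is illustrative):
{
  "prompt": "refactor @src/parser.ts for readability",
  "model": "gpt-5.5",
  "reasoningEffort": "xhigh"
}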
| Model | Use Case |
|---|---|
| `gpt-5.5` | Frontier model for complex coding, research, agentic work |
| `gpt-5.4` | Strong model for everyday coding |
| `gpt-5.4-mini` | Small, fast, cost-efficient for simpler coding tasks |
| `gpt-5.3-codex` | Coding-optimized model |
| `gpt-5.2` | Optimized for professional work and long-running agents |
Key Features
Session Management (v1.4.0+)
Multi-turn conversations with workspace isolation:
{ "prompt": "analyze code", "sessionId": "my-session" }
{ "prompt": "continue from here", "sessionId": "my-session" }
{ "prompt": "start fresh", "sessionId": "my-session", "resetSession": true }
Environment:
- `CODEX_SESSION_TTL_MS` - Session TTL (default: 24h)
- `CODEX_MAX_SESSIONS` - Max sessions (default: 50)
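These variables can be supplied through the MCP client configuration. A sketch for claude_desktop_config.json, assuming your client supports per-server env entries (Claude Desktop does); the values shown (a 1-hour TTL, 20 sessions) are illustrative:
{
  "mcpServers": {
    "codex-cli": {
      "command": "npx",
      "args": ["-y", "@trishchuk/codex-mcp-tool"],
      "env": {
        "CODEX_SESSION_TTL_MS": "3600000",
        "CODEX_MAX_SESSIONS": "20"
      }
    }
  }
}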
Local OSS Models (v1.6.0+)
Run with local Ollama or LM Studio instead of OpenAI:
// Ollama
{ "prompt": "analyze @src/", "localProvider": "ollama", "model": "qwen3:8b" }
// LM Studio
{ "prompt": "analyze @src/", "localProvider": "lmstudio", "model": "my-model" }
// Auto-select provider
{ "prompt": "analyze @src/", "oss": true }
Requirements: Ollama running locally with a model that supports tool calling (e.g. qwen3:8b).
Advanced Options
| Parameter | Description |
|---|---|
| `model` | Model selection |
| `sessionId` | Enable conversation continuity |
| `sandbox` | Enable `--full-auto` mode |
| `search` | Enable web search |
| `changeMode` | Structured OLD/NEW edits |
| `addDirs` | Additional writable directories |
| `toolOutputTokenLimit` | Cap response verbosity (100-10,000) |
| `reasoningEffort` | Reasoning depth: low, medium, high, xhigh |
| `oss` | Use local OSS model provider |
| `localProvider` | Local provider: lmstudio or ollama |
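Several of these options compose in a single call. A hypothetical combination enabling change mode, granting write access to an extra directory (assuming addDirs accepts an array of paths), and capping output:
{
  "prompt": "add input validation to @src/api.ts",
  "changeMode": true,
  "addDirs": ["./docs"],
  "toolOutputTokenLimit": 2000
}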
CLI Compatibility
| Version | Features |
|---|---|
| v0.60.0+ | GPT-5.2 model family |
| v0.59.0+ | --add-dir, token limits |
| v0.52.0+ | Native --search flag |
| v0.36.0+ | Native codex resume (sessions) |
Troubleshooting
codex --version # Check CLI version
codex login # Authenticate
Use the `health` tool for diagnostics: 'use health verbose:true'
Migration
v2.2.x → v2.3.0: gpt-5.5 as new default, added gpt-5.4-mini, dropped retired models (gpt-5.3-codex-spark, gpt-5.2-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini).
v2.0.x → v2.1.0: gpt-5.4 as new default model, updated fallback chain.
v1.5.x → v1.6.0: Local OSS model support (localProvider, oss), gpt-5.3-codex default model, xhigh reasoning effort.
v1.3.x → v1.4.0: New sessionId parameter, list-sessions/health tools, structured error handling. No breaking changes.
License
MIT License. Not affiliated with OpenAI.
Documentation | Issues | Inspired by jamubc/gemini-mcp-tool