cross-llm-mcp

mcp
Security Audit
Fail
Health Pass
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 14 GitHub stars
Code Fail
  • process.env — Environment variable access in scripts/postinstall.js
  • process.env — Environment variable access in src/async-job-manager.ts
  • process.env — Environment variable access in src/llm-clients.ts
  • process.env — Environment variable access in src/preferences.ts
  • exec() — Shell command execution in src/prompt-logger.ts
  • process.env — Environment variable access in src/prompt-logger.ts
Permissions Pass
  • Permissions — No dangerous permissions requested
Purpose
This is a Model Context Protocol (MCP) server that acts as a unified gateway to multiple Large Language Model (LLM) APIs. It allows users to query various AI providers (ChatGPT, Claude, Gemini, etc.) simultaneously, manage model preferences, and log prompt history.

Security Assessment
Overall Risk: Medium. The tool legitimately requires environment variable access to retrieve API keys for the LLM providers it supports. However, the automated scan flagged shell command execution (`exec()`) in `src/prompt-logger.ts`; executing shell commands is a potential injection vector if inputs are not properly sanitized, which is a significant concern for a locally running server. Additionally, by design, the tool sends prompts to third-party AI APIs over the network. No hardcoded secrets were detected, and it does not request dangerous system permissions.

Quality Assessment
The project appears to be actively maintained, with its last push occurring within the past day. It is released under the permissive MIT license, which helps open-source adoption. However, community trust and visibility are currently low (14 GitHub stars). As a relatively new and small project, it may lack the peer review and battle-testing found in larger, more established tools.

Verdict
Use with caution — the required external API calls are standard for its purpose, but the presence of shell command execution in the codebase warrants a manual code review before installation to ensure no unsafe execution paths exist.
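
A quick way to start that review, assuming a local clone of the repository, is to locate the flagged call directly:

# Fetch the source and inspect how exec() is used before installing
git clone https://github.com/JamesANZ/cross-llm-mcp.git
grep -n "exec(" cross-llm-mcp/src/prompt-logger.ts
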
SUMMARY

A Model Context Protocol (MCP) server that provides access to multiple Large Language Model (LLM) APIs including ChatGPT, Claude, Gemini, Mistral, Kimi K2, and DeepSeek.

README.md

🤖 Cross-LLM MCP Server

Access multiple LLM APIs from one place. Call ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral, and Hugging Face Inference Router with intelligent model selection, preferences, and prompt logging.

An MCP (Model Context Protocol) server that provides unified access to multiple Large Language Model APIs for AI coding environments like Cursor and Claude Desktop.


Why Use Cross-LLM MCP?

  • 🌐 9 LLM Providers – ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral, Hugging Face
  • 🎯 Smart Model Selection – Tag-based preferences (coding, business, reasoning, math, creative, general)
  • 📊 Prompt Logging – Track all prompts with history, statistics, and analytics
  • 💰 Cost Optimization – Choose flagship or cheaper models based on preference
  • Easy Setup – One-click install in Cursor or simple manual setup
  • 🔄 Call All LLMs – Get responses from all providers simultaneously

Quick Start

Ready to access multiple LLMs? Install in seconds:

Install in Cursor (Recommended):

🔗 Install in Cursor

Or install manually:

npm install -g cross-llm-mcp
# Or from source:
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp && npm install && npm run build

Features

🤖 Individual LLM Tools

  • call-chatgpt – OpenAI's ChatGPT API
  • call-claude – Anthropic's Claude API
  • call-deepseek – DeepSeek API
  • call-gemini – Google's Gemini API
  • call-grok – xAI's Grok API
  • call-kimi – Moonshot AI's Kimi API
  • call-perplexity – Perplexity AI API
  • call-mistral – Mistral AI API
  • call-huggingface – Hugging Face Inference Router (OpenAI-compatible Hub models)

🔄 Combined Tools

  • call-all-llms – Call all LLMs with the same prompt
  • call-llm – Call a specific provider by name
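
For example, call-llm takes a provider name alongside the prompt. A minimal sketch, assuming the argument is named provider (check the tool schema the server actually exposes):

{
  "tool": "call-llm",
  "arguments": {
    "provider": "claude",
    "prompt": "Summarize the MCP specification in two sentences"
  }
}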

⚙️ Preferences & Model Selection

  • get-user-preferences – Get current preferences
  • set-user-preferences – Set default model, cost preference, and tag-based preferences
  • get-models-by-tag – Find models by tag (coding, business, reasoning, math, creative, general)

📝 Prompt Logging

  • get-prompt-history – View prompt history with filters
  • get-prompt-stats – Get statistics about prompt logs
  • delete-prompt-entries – Delete log entries by criteria
  • clear-prompt-history – Clear all prompt logs

Installation

Cursor (One-Click)

Click the install link above or use:

cursor://anysphere.cursor-deeplink/mcp/install?name=cross-llm-mcp&config=eyJjcm9zcy1sbG0tbWNwIjp7ImNvbW1hbmQiOiJucHgiLCJhcmdzIjpbIi15IiwiY3Jvc3MtbGxtLW1jcCJdfX0=
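
The config parameter is base64-encoded JSON. Decoded, it simply tells Cursor to launch the server with npx:

{
  "cross-llm-mcp": {
    "command": "npx",
    "args": ["-y", "cross-llm-mcp"]
  }
}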

After installation, add your API keys in Cursor settings (see Configuration below).

Manual Installation

Requirements: Node.js 18+ and npm

# Clone and build
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp
npm install
npm run build

Claude Desktop

Add to claude_desktop_config.json:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "cross-llm-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/cross-llm-mcp/build/index.js"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key_here",
        "ANTHROPIC_API_KEY": "your_anthropic_api_key_here",
        "DEEPSEEK_API_KEY": "your_deepseek_api_key_here",
        "GEMINI_API_KEY": "your_gemini_api_key_here",
        "XAI_API_KEY": "your_grok_api_key_here",
        "KIMI_API_KEY": "your_kimi_api_key_here",
        "PERPLEXITY_API_KEY": "your_perplexity_api_key_here",
        "MISTRAL_API_KEY": "your_mistral_api_key_here",
        "HF_TOKEN": "your_huggingface_token_here"
      }
    }
  }
}

Restart Claude Desktop after configuration.

Configuration

API Keys

Set environment variables for the LLM providers you want to use:

export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export DEEPSEEK_API_KEY="your_deepseek_api_key"
export GEMINI_API_KEY="your_gemini_api_key"
export XAI_API_KEY="your_grok_api_key"
export KIMI_API_KEY="your_kimi_api_key"
export PERPLEXITY_API_KEY="your_perplexity_api_key"
export MISTRAL_API_KEY="your_mistral_api_key"
export HF_TOKEN="your_huggingface_token"
# Or: HUGGINGFACE_API_KEY (same as HF_TOKEN)
# Optional: DEFAULT_HUGGINGFACE_MODEL, HUGGINGFACE_INFERENCE_BASE_URL (default https://router.huggingface.co/v1)

Getting API Keys

Running Hub models locally (outside this MCP)

This server calls Hugging Face’s hosted Inference Router; it does not download weights or run PyTorch/GGUF inside Node. To run models on your machine, use tools such as Ollama, llama.cpp, Text Generation Inference, or Hugging Face Inference Endpoints, then point other clients at those services if they expose an API.
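
For illustration only (this is outside the MCP server itself): a locally running Ollama instance exposes an OpenAI-compatible endpoint that such clients can target, assuming Ollama is installed and a model like llama3 has been pulled:

# Query a locally hosted model through Ollama's OpenAI-compatible API
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Reply with exactly: ok"}]
  }'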

Usage Examples

Call ChatGPT

Get a response from OpenAI:

{
  "tool": "call-chatgpt",
  "arguments": {
    "prompt": "Explain quantum computing in simple terms",
    "temperature": 0.7,
    "max_tokens": 500
  }
}

Call Hugging Face

Get a response from a Hub model via the Inference Router (model is the Hub repo id, e.g. Qwen/Qwen2.5-7B-Instruct):

{
  "tool": "call-huggingface",
  "arguments": {
    "prompt": "Reply with exactly: ok",
    "model": "Qwen/Qwen2.5-7B-Instruct",
    "temperature": 0.3,
    "max_tokens": 32
  }
}

Call All LLMs

Get responses from all providers:

{
  "tool": "call-all-llms",
  "arguments": {
    "prompt": "Write a short poem about AI",
    "temperature": 0.8
  }
}

Set Tag-Based Preferences

Automatically use the best model for each task type:

{
  "tool": "set-user-preferences",
  "arguments": {
    "defaultModel": "gpt-4o",
    "costPreference": "cheaper",
    "tagPreferences": {
      "coding": "deepseek-r1",
      "general": "gpt-4o",
      "business": "claude-3.5-sonnet-20241022",
      "reasoning": "deepseek-r1",
      "math": "deepseek-r1",
      "creative": "gpt-4o"
    }
  }
}

Get Prompt History

View your prompt logs:

{
  "tool": "get-prompt-history",
  "arguments": {
    "provider": "chatgpt",
    "limit": 10
  }
}
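
The related get-prompt-stats tool summarizes the same log. A minimal sketch (any optional filter arguments it may accept are not shown here):

{
  "tool": "get-prompt-stats",
  "arguments": {}
}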

Model Tags

Models are tagged by their strengths:

  • coding: deepseek-r1, deepseek-coder, gpt-4o, claude-3.5-sonnet-20241022
  • business: claude-3-opus-20240229, gpt-4o, gemini-1.5-pro
  • reasoning: deepseek-r1, o1-preview, claude-3.5-sonnet-20241022
  • math: deepseek-r1, o1-preview, o1-mini
  • creative: gpt-4o, claude-3-opus-20240229, gemini-1.5-pro
  • general: gpt-4o-mini, claude-3-haiku-20240307, gemini-1.5-flash
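
The get-models-by-tag tool returns the models registered under one of these tags. A minimal sketch, assuming the argument is named tag:

{
  "tool": "get-models-by-tag",
  "arguments": {
    "tag": "coding"
  }
}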

Use Cases

  • Multi-Perspective Analysis – Get different perspectives from multiple LLMs
  • Model Comparison – Compare responses to understand strengths and weaknesses
  • Cost Optimization – Choose the most cost-effective model for each task
  • Quality Assurance – Cross-reference responses from multiple models
  • Intelligent Selection – Automatically use the best model for coding, business, reasoning, etc.
  • Prompt Analytics – Track usage, costs, and patterns with automatic logging

Technical Details

Built with: Node.js, TypeScript, MCP SDK
Dependencies: @modelcontextprotocol/sdk, superagent, zod
Platforms: macOS, Windows, Linux

Preference Storage:

  • Unix/macOS: ~/.cross-llm-mcp/preferences.json
  • Windows: %APPDATA%/cross-llm-mcp/preferences.json
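
The stored file presumably mirrors the set-user-preferences arguments shown earlier; a hypothetical ~/.cross-llm-mcp/preferences.json might look like:

{
  "defaultModel": "gpt-4o",
  "costPreference": "cheaper",
  "tagPreferences": {
    "coding": "deepseek-r1",
    "general": "gpt-4o"
  }
}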

Prompt Log Storage:

  • Unix/macOS: ~/.cross-llm-mcp/prompts.json
  • Windows: %APPDATA%/cross-llm-mcp/prompts.json

Contributing

If this project helps you, please star it on GitHub!

Contributions welcome! Please open an issue or submit a pull request.

License

MIT License – see LICENSE.md for details.

Support

If you find this project useful, consider supporting it:

⚡ Lightning Network

lnbc1pjhhsqepp5mjgwnvg0z53shm22hfe9us289lnaqkwv8rn2s0rtekg5vvj56xnqdqqcqzzsxqyz5vqsp5gu6vh9hyp94c7t3tkpqrp2r059t4vrw7ps78a4n0a2u52678c7yq9qyyssq7zcferywka50wcy75skjfrdrk930cuyx24rg55cwfuzxs49rc9c53mpz6zug5y2544pt8y9jflnq0ltlha26ed846jh0y7n4gm8jd3qqaautqa

₿ Bitcoin: bc1ptzvr93pn959xq4et6sqzpfnkk2args22ewv5u2th4ps7hshfaqrshe0xtp

Ξ Ethereum/EVM: 0x42ea529282DDE0AA87B42d9E83316eb23FE62c3f
