reachy-mini-mcp

mcp
Security Audit
Pass
Health Pass
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 26 GitHub stars
Code Pass
  • Code scan — Scanned 3 files during light audit, no dangerous patterns found
Permissions Pass
  • Permissions — No dangerous permissions requested
Purpose
This MCP server acts as a bridge, allowing AI systems (like Claude or GPT) to physically control a Pollen Robotics Reachy Mini robot. It enables the AI to see, speak, listen, and express emotions through physical movements.

Security Assessment
Overall risk: Low. The automated code scan found no dangerous patterns, hardcoded secrets, or requests for overly broad permissions. The server acts as an intermediary that routes commands to the robot's local daemon. However, it does require you to provide third-party API keys (Deepgram or xAI) in the MCP configuration for voice processing, which means your spoken audio data will be sent over the internet to those external services. The camera (`snap`) tool also captures images, though the processing appears to remain local to the robot's system.

Quality Assessment
The project is in very good health and appears actively maintained, with the latest code push occurring today. It uses the permissive and standard MIT license, making it freely usable for most projects. Community trust is currently low-to-moderate, reflected by 26 GitHub stars. This is expected for a niche hardware integration tool, but it means the codebase hasn't undergone massive peer review.

Verdict
Safe to use, provided you are comfortable routing your robot's audio interactions through external AI voice providers like Deepgram.
SUMMARY

Reachy Mini MCP | Give your AI a body. This MCP server lets AI systems control the Pollen Robotics Reachy Mini robot. | Speak, listen, see, and express emotions through physical movement and voice. | Works with Claude, Windsurf, Cursor, or any MCP-compatible AI. | Zero robotics expertise required.

README.md

Reachy Mini MCP

Give your AI a body.

This MCP server lets AI systems control Pollen Robotics' Reachy Mini robot—speak, listen, see, and express emotions through physical movement. Works with Claude, GPT, Grok, or any MCP-compatible AI.

7 tools. 30 minutes to first demo. Zero robotics expertise required.

https://reachy-mini-mcp-969sxyq.gamma.site/


For AI Systems

Token-efficient tool reference for programmatic use:

| Tool | Args | Purpose |
|------|------|---------|
| `speak` | `text`, `listen_after=0` | Voice + gesture, optionally listen after |
| `listen` | `duration=3` | STT via Deepgram Nova-2 |
| `snap` | - | Camera capture (base64 JPEG) |
| `show` | `emotion`, `move=""` | Express emotion or play recorded move |
| `look` | `roll`, `pitch`, `yaw`, `z`, `duration` | Head positioning (degrees) |
| `rest` | `mode="neutral"` | neutral / sleep / wake |
| `discover` | `library="emotions"` | Find available recorded moves |

speak()

Supports embedded move markers for choreographed speech:

speak("[move:curious1] What's this? [move:surprised1] Oh wow!")

Moves fire right before their speech chunk. Use listen_after=5 to hear response.
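The marker-splitting step can be sketched in a few lines. This parser is an illustrative assumption based on the `[move:name]` syntax shown above, not the server's actual implementation:

```python
import re

# Split a speak() string into (move, text) chunks. The [move:name]
# marker syntax is from the README; this parser is an illustrative
# sketch, not the server's actual code.
def parse_move_markers(text):
    parts = re.split(r"\[move:([\w-]+)\]", text)
    chunks = []
    # re.split yields [leading_text, move1, text1, move2, text2, ...]
    leading = parts[0].strip()
    if leading:
        chunks.append((None, leading))
    for i in range(1, len(parts), 2):
        chunks.append((parts[i], parts[i + 1].strip()))
    return chunks

pairs = parse_move_markers("[move:curious1] What's this? [move:surprised1] Oh wow!")
# pairs == [("curious1", "What's this?"), ("surprised1", "Oh wow!")]
```

Each tuple pairs a move name with the speech chunk it should precede; a `None` move means plain leading speech.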

show()

Built-in emotions (fast, local):
neutral, curious, uncertain, recognition, joy, thinking, listening, agreeing, disagreeing, sleepy, surprised, focused

Recorded moves (81 from Pollen):

show(move="loving1")
show(move="fear1")
show(move="serenity1")

Use discover() to see all available moves.


Quick Start

# Install
cd reachy-mini-mcp
poetry install

# Set API key (required for speak/listen)
export DEEPGRAM_API_KEY=your_key_here

# Start simulator daemon
mjpython -m reachy_mini.daemon.app.main --sim --scene minimal

# Run MCP server
poetry run python src/server.py

Architecture

AI (Claude/GPT/Grok) → MCP Server → SDK → Daemon → Robot/Simulator

7 tools, in keeping with Miller's Law: the whole toolset fits in working memory.
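The MCP-server-to-daemon hop can be sketched with a small helper. Only the base URL default comes from this README; the `move/play` path and payload shape are illustrative assumptions, not the daemon's documented API:

```python
import os
import json
import urllib.request

# Base URL matches the REACHY_DAEMON_URL default from this README;
# the endpoint path and JSON body below are illustrative assumptions.
DAEMON_URL = os.environ.get("REACHY_DAEMON_URL", "http://localhost:8321/api")

def daemon_request(path, payload):
    """Build (but do not send) a POST request to the local daemon."""
    return urllib.request.Request(
        f"{DAEMON_URL}/{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = daemon_request("move/play", {"name": "curious1"})
# e.g. req.full_url == "http://localhost:8321/api/move/play" with the default URL
```

Keeping the base URL in an environment variable is what lets the same server target a simulator or real hardware without code changes.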

Voice Providers

| Provider | Status | Use Case |
|----------|--------|----------|
| Grok Voice | ✅ Supported | xAI's expressive voice (Eve, Ara, Leo, Rex, Sal) |
| Deepgram | ✅ Supported | TTS (Aura 2) + STT (Nova 2) |

Grok Voice is used automatically when XAI_API_KEY is set; otherwise the server falls back to Deepgram.
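That fallback rule can be sketched in a few lines (the function name and the `aura-2` return value are illustrative, not the server's actual code):

```python
import os

# Mirrors the fallback rule above: Grok Voice when XAI_API_KEY is set,
# otherwise Deepgram. Helper name and return values are illustrative.
def pick_tts_provider(env=os.environ):
    if env.get("XAI_API_KEY"):
        return ("grok", env.get("GROK_VOICE", "eve"))
    return ("deepgram", "aura-2")

provider, voice = pick_tts_provider({"XAI_API_KEY": "xai-...", "GROK_VOICE": "ara"})
# → ("grok", "ara")
```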

MCP Config

Claude Desktop

~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "reachy-mini": {
      "command": "poetry",
      "args": ["-C", "/path/to/reachy-mini-mcp", "run", "python", "src/server.py"],
      "env": {
        "DEEPGRAM_API_KEY": "your_key_here"
      }
    }
  }
}

Claude Code

~/.claude.json:

{
  "mcpServers": {
    "reachy-mini": {
      "command": "poetry",
      "args": ["-C", "/path/to/reachy-mini-mcp", "run", "python", "src/server.py"],
      "env": {
        "DEEPGRAM_API_KEY": "your_key_here"
      }
    }
  }
}

Environment Variables

| Variable | Required | Default | Purpose |
|----------|----------|---------|---------|
| `XAI_API_KEY` | No | - | Grok Voice TTS (preferred) |
| `GROK_VOICE` | No | `eve` | Grok voice: ara, eve, leo, rex, sal |
| `DEEPGRAM_API_KEY` | Yes* | - | STT (always required for listen) + TTS fallback |
| `REACHY_DAEMON_URL` | No | `http://localhost:8321/api` | Daemon API endpoint |

*Required for listen(). Also required for speak() if XAI_API_KEY is not set.
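A minimal sketch of enforcing these key requirements at startup (hypothetical helper, not part of the server):

```python
import os

# Enforces the requirement table above: listen() always needs Deepgram;
# speak() needs Deepgram only when no xAI key is present.
def check_keys(tool, env=os.environ):
    if tool == "listen" and not env.get("DEEPGRAM_API_KEY"):
        raise RuntimeError("DEEPGRAM_API_KEY is required for listen()")
    if tool == "speak" and not (env.get("XAI_API_KEY") or env.get("DEEPGRAM_API_KEY")):
        raise RuntimeError("Set XAI_API_KEY or DEEPGRAM_API_KEY for speak()")
```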

Requirements

  • Python 3.10+
  • reachy-mini SDK (installed via poetry)
  • MuJoCo (for simulation)
  • Deepgram API key (for speak/listen)

Hardware Notes

  • Simulator: mjpython required on macOS for MuJoCo visualization
  • Real hardware: Same MCP server, daemon auto-connects
  • Port conflicts: Zenoh uses 7447, daemon uses 8321 by default
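If the daemon or Zenoh fails to start, a quick way to check whether those default ports are already taken (plain stdlib sketch, not part of this project):

```python
import socket

# Try binding each default port; failure to bind means another process
# (likely a running daemon or Zenoh router) already holds it.
def port_free(port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("127.0.0.1", port))
            return True
        except OSError:
            return False

for port in (7447, 8321):
    print(port, "free" if port_free(port) else "in use")
```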

License

MIT License - see LICENSE
