wechat-ai-bridge

agent
Security Audit
Failed
Health: Warning
  • License — License: MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 5 GitHub stars
Code: Failed
  • exec() — Shell command execution in adapters/claude.js
  • process.env — Environment variable access in adapters/claude.js
  • process.env — Environment variable access in adapters/codex.js
  • process.env — Environment variable access in adapters/gemini.js
  • network request — Outbound network request in adapters/gemini.js
  • exec() — Shell command execution in bridge.js
  • process.env — Environment variable access in bridge.js
  • process.env — Environment variable access in config.js
  • process.env — Environment variable access in dir-manager.js
  • exec() — Shell command execution in file-ref-protect.js
Permissions: Passed
  • Permissions — No dangerous permissions requested

No AI report is available for this listing yet.

SUMMARY

Run full Claude Code / Codex / Gemini in WeChat — not an API wrapper. Session management, tool approval, bidirectional file relay. Official iLink API, zero ban risk.

README.md

wechat-ai-bridge

Full AI Coding Agents. In WeChat. Private Chat.

Run actual Claude Code / Codex / Gemini from WeChat — not an API wrapper — with session management, tool approval, and file relay.

A self-hosted WeChat bridge that connects to AI coding agents via the official iLink Bot API. No VPN needed. Zero ban risk.

MIT License
Version
Bun
WeChat

English | 简体中文

How is this different from cc-weixin / wechat-acp?

They connect one AI backend with basic message forwarding. This project adds session management (/new /resume /sessions), multi-backend switching (Claude + Codex + Gemini), tool approval interaction, and bidirectional file relay — the same workflow you get from telegram-ai-bridge, now in WeChat.


What This Unlocks

Full AI Agent in Your Pocket — No VPN

Send a message in WeChat. Get a full Claude Code response — with Bash, Read, Write, Edit, Glob, Grep, WebFetch, and all native tools. No terminal. No VPN. Works anywhere WeChat works.

Session Management

You: /new              ← Start a fresh session
You: /sessions         ← List recent sessions
You: /resume 3         ← Pick up where you left off
You: /backend codex    ← Switch backend
You: /model            ← Switch models (reply with number)

Sessions persist in SQLite across restarts. Pick up after a reboot, a network drop, or a flight. /sessions shows all CC sessions — including ones created from terminal CLI or other bridges — with the last user message as the title. Resume by number: /resume 3.
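The resume-by-number flow can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: the session objects and the fallback-to-raw-ID rule are assumptions.

```javascript
// Hypothetical sketch: resolving "/resume 3" against a recent-sessions list.
// A numeric argument is a 1-based index into the list shown by /sessions;
// anything else is treated as a raw session ID.
function resolveResume(arg, sessions) {
  const n = Number(arg);
  if (Number.isInteger(n) && n >= 1 && n <= sessions.length) {
    return sessions[n - 1].id; // sequence number → session ID
  }
  return arg; // assume the user pasted a session ID directly
}

const sessions = [
  { id: "a1f3", title: "fix the parser" },
  { id: "b2c4", title: "add tests" },
  { id: "c9d1", title: "refactor bridge.js" },
];
console.log(resolveResume("3", sessions)); // "c9d1"
```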

Multi-Backend Support

Backend   SDK                           Status
claude    Claude Code (via Agent SDK)   Recommended
codex     Codex CLI (via Codex SDK)     Recommended
gemini    Gemini Code Assist API        Experimental

Switch backends per chat with /backend.

Tool Approval via Chat

When the AI needs permission to run a tool:

🔒 Tool Approval

Tool: Bash
git push origin main

Reply with a number:
1. Allow
2. Deny
3. Always allow "Bash"
4. YOLO (allow all)

Reply 1, 2, 3, or 4. No buttons needed — just text.
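The four replies map onto a small approval state machine. The sketch below is an assumption for illustration; the real handler lives in the bridge and its state shape may differ.

```javascript
// Hypothetical sketch of the 1/2/3/4 approval reply handling.
const alwaysAllowed = new Set(); // tools granted "always allow"
let yoloMode = false;            // option 4: allow everything

function handleApprovalReply(reply, toolName) {
  switch (reply.trim()) {
    case "1": return { allow: true };                              // allow once
    case "2": return { allow: false };                             // deny
    case "3": alwaysAllowed.add(toolName); return { allow: true }; // always allow this tool
    case "4": yoloMode = true; return { allow: true };             // YOLO
    default:  return null; // not an approval reply; treat as a normal message
  }
}

function needsApproval(toolName) {
  return !yoloMode && !alwaysAllowed.has(toolName);
}
```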

Bidirectional File Relay

  • Send photos/files to AI: WeChat media is downloaded, decrypted (AES-128-ECB), and injected into the prompt
  • Receive files from AI: AI-generated files and screenshots are encrypted, uploaded to CDN, and sent back to your WeChat chat
  • Long code output: >2000 chars with >60% code → sent as file attachment with preview
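The "send long code as a file" rule can be sketched as follows. The thresholds come from the bullet above (>2000 chars, >60% code); the line classifier itself is an assumption made up for this illustration.

```javascript
// Hypothetical sketch of the code-as-file heuristic.
// A line "looks like code" if it is indented, contains braces/semicolons,
// or starts with a common declaration keyword. This classifier is illustrative.
function looksLikeCodeLine(line) {
  return /^\s{2,}|[{};]|^(import|def|function|const|let|class)\b/.test(line);
}

function shouldSendAsFile(text) {
  if (text.length <= 2000) return false; // short output stays inline
  const lines = text.split("\n").filter((l) => l.trim() !== "");
  if (lines.length === 0) return false;
  const codeLines = lines.filter(looksLikeCodeLine).length;
  return codeLines / lines.length > 0.6; // mostly code → attach as file
}
```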

Built-in Resilience

  • Rate limiting: Per-user sliding window
  • Idle monitoring: Watchdog timer for hung tasks
  • Message batching: FlushGate merges rapid consecutive messages (800ms window)
  • Send retry: Exponential backoff with error classification
  • File reference protection: Prevents auto-linking of .md, .go, .py filenames
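A per-user sliding-window limiter can be sketched in a handful of lines. The limit and window mirror the sample config later in this README (10 requests per 60 s); the data structure is an assumption, not the project's actual implementation.

```javascript
// Hypothetical sketch of a per-user sliding-window rate limiter.
const windows = new Map(); // userId → array of request timestamps (ms)

function allowRequest(userId, now = Date.now(), limit = 10, windowMs = 60_000) {
  // Keep only timestamps still inside the window, then check the count.
  const ts = (windows.get(userId) ?? []).filter((t) => now - t < windowMs);
  if (ts.length >= limit) {
    windows.set(userId, ts);
    return false; // over the limit inside the sliding window
  }
  ts.push(now);
  windows.set(userId, ts);
  return true;
}
```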

Quick Start

Prerequisites: Bun runtime, WeChat version >= 2026.3.20, and at least one backend CLI: Claude Code, Codex, or Gemini CLI.

git clone https://github.com/AliceLJY/wechat-ai-bridge.git
cd wechat-ai-bridge
npm install              # or: bun install
bun run bootstrap --backend claude
bun run check --backend claude
bun run start --backend claude

On first launch, a QR code appears in your terminal. Scan it with WeChat to authenticate. Token is saved to ~/.wechat-ai-bridge/token.json for subsequent launches.


WeChat Commands

All commands are plain text — just type and send:

Command           Description
/help             Show all commands
/new              Start a new session
/cancel           Abort the running task
/sessions         List recent sessions
/resume <#|id>    Resume by sequence number or session ID
/backend [name]   Switch backend (claude/codex/gemini)
/model [name]     Pick a model (reply with number)
/effort [level]   Set thinking depth
/status           Show backend, model, cwd, session
/dir [path]       Switch working directory
/verbose 0|1|2    Change progress verbosity

How It Works

WeChat App ←→ iLink Server (ilinkai.weixin.qq.com) ←→ wechat-ai-bridge ←→ AI Backend
                                                         │
                                                         ├── weixin/     (iLink connection)
                                                         ├── adapters/   (Claude/Codex/Gemini)
                                                         ├── sessions.js (SQLite persistence)
                                                         └── bridge.js   (core message loop)

The bridge uses WeChat's official iLink Bot API — the same protocol behind OpenClaw's WeChat integration. All communication is standard HTTP/JSON with long-polling (getupdates), similar to Telegram's Bot API. Media files are encrypted with AES-128-ECB on CDN.
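One long-poll cycle of such a loop might look like the sketch below. The endpoint path, query parameters, auth header, and response shape are assumptions modeled on Telegram-style Bot APIs; the project's weixin/monitor.js is the authoritative implementation.

```javascript
// Hypothetical sketch of one getupdates long-poll cycle.
async function pollOnce(baseUrl, botToken, offset, onMessage) {
  const res = await fetch(`${baseUrl}/getupdates?offset=${offset}&timeout=35`, {
    headers: { Authorization: `Bearer ${botToken}` }, // auth scheme is an assumption
  });
  const { updates = [] } = await res.json();
  let next = offset;
  for (const u of updates) {
    next = Math.max(next, u.update_id + 1); // advance offset so updates are not redelivered
    await onMessage(u);
  }
  return next; // caller loops: offset = await pollOnce(...)
}
```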

This is an official Tencent API. Zero ban risk. Backed by WeChat ClawBot Terms of Use.


vs. Existing Projects

Feature                  cc-weixin    wechat-acp     This project
AI backends              Claude only  6 via ACP      Claude + Codex + Gemini (native SDK)
Session management       None         None           /new /resume /sessions /backend
Tool approval            Auto-allow   Auto-allow     Interactive (1/2/3/4 choice)
Model switching          Hardcoded    Per-preset     /model with numbered selection
File relay (in)          Text only    Images+files   Images + files (AES-128-ECB decrypted)
File relay (out)         None         None           Auto-detect file paths + CDN upload
Rate limiting            None         None           Per-user sliding window
Idle monitoring          None         None           Watchdog + auto-reset
Message batching         None         None           FlushGate (800ms merge)
Cross-platform sessions  None         None           See all CC sessions (CLI + other bridges)
Code as file             None         None           >60% code → file attachment

Configuration

bun run bootstrap --backend claude generates a starter config.json.

{
  "shared": {
    "cwd": "/Users/you",
    "defaultVerboseLevel": 1,
    "executor": "direct",
    "rateLimitMaxRequests": 10,
    "rateLimitWindowMs": 60000,
    "idleTimeoutMs": 1800000
  },
  "backends": {
    "claude": {
      "enabled": true,
      "sessionsDb": "sessions.db",
      "model": "claude-sonnet-4-6",
      "permissionMode": "default"
    }
  }
}
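A plausible way to resolve the effective settings for one backend is to merge the "shared" block with the selected backend's block. The key names below match the sample config; the merge rule itself is an assumption for illustration.

```javascript
// Hypothetical sketch: merge "shared" settings with a backend's overrides.
function resolveBackendConfig(cfg, backend) {
  const b = cfg.backends?.[backend];
  if (!b || b.enabled === false) {
    throw new Error(`Backend "${backend}" is not enabled in config`);
  }
  return { ...cfg.shared, ...b }; // backend-specific keys win over shared ones
}

const sample = {
  shared: { cwd: "/Users/you", rateLimitMaxRequests: 10 },
  backends: { claude: { enabled: true, model: "claude-sonnet-4-6" } },
};
console.log(resolveBackendConfig(sample, "claude").model); // "claude-sonnet-4-6"
```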

Project Structure
  • start.js — CLI entry with QR login flow
  • config.js — Config loader and setup wizard
  • bridge.js — Core message loop
  • progress.js — Typing indicator + processing status
  • sessions.js — SQLite session persistence
  • weixin/api.js — iLink HTTP client
  • weixin/auth.js — QR code login + token persistence
  • weixin/monitor.js — Long-polling message listener
  • weixin/send.js — Message sending + text chunking
  • weixin/media.js — CDN upload/download + AES-128-ECB
  • weixin/types.js — iLink protocol constants
  • adapters/ — AI backend integrations (Claude/Codex/Gemini)
  • executor/ — Execution modes

iLink Protocol Notes
  • Authentication: QR code scan → bot_token (persisted to ~/.wechat-ai-bridge/token.json)
  • Message flow: getupdates long-poll (35s hold) → process → sendmessage with context_token
  • Media: AES-128-ECB encrypted CDN (novac2c.cdn.weixin.qq.com)
  • Limitation: 1 WeChat account = 1 bot (1:1 binding)
  • No group chat: iLink does not support group messaging

For protocol details, see research.md.
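The cipher named in the notes above is available directly in Node's crypto module. The sketch below shows an AES-128-ECB round trip; key handling and padding details are assumptions, and weixin/media.js is the authoritative implementation.

```javascript
// Hypothetical sketch of AES-128-ECB media encryption/decryption.
import { createCipheriv, createDecipheriv } from "node:crypto";

function encryptMedia(buf, key16) {
  const c = createCipheriv("aes-128-ecb", key16, null); // ECB mode uses no IV
  return Buffer.concat([c.update(buf), c.final()]);
}

function decryptMedia(buf, key16) {
  const d = createDecipheriv("aes-128-ecb", key16, null);
  return Buffer.concat([d.update(buf), d.final()]);
}
```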


Adding Custom Backends

The adapter interface is designed for easy extension. To add a new AI backend (e.g. OpenCode, Crush, or any CLI agent):

  1. Create adapters/yourbackend.js exporting createAdapter(config):
export function createAdapter(config = {}) {
  return {
    name: "yourbackend",
    async *streamQuery({ prompt, sessionId, model, cwd, abortController }) {
      yield { type: "session_init", sessionId: "..." };
      yield { type: "text", text: "response chunk" };
      yield { type: "result", success: true, text: "final answer" };
    },
    statusInfo() {
      return { backend: "yourbackend", model: "...", session: "..." };
    }
  };
}
  2. Register it in adapters/interface.js and add the name to AVAILABLE_BACKENDS in config.js.

Event types: session_init | progress (tool use indicators) | text (streaming chunks) | result (final).
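A consumer of this interface might drain the generator like the sketch below. The event names match the list above, but the consumption loop itself is an assumption, not the bridge's actual code.

```javascript
// Illustrative sketch of consuming an adapter's streamQuery generator.
async function runQuery(adapter, prompt) {
  let sessionId = null;
  let finalText = "";
  for await (const ev of adapter.streamQuery({ prompt })) {
    if (ev.type === "session_init") sessionId = ev.sessionId;
    else if (ev.type === "text") finalText += ev.text;  // accumulate streaming chunks
    else if (ev.type === "result") finalText = ev.text; // final answer replaces chunks
    // "progress" events would drive the typing indicator
  }
  return { sessionId, finalText };
}
```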

Community interest: OpenCode / Crush support has been requested. OpenCode offers a JS/TS SDK (@opencode-ai/sdk); Crush provides a Unix socket REST API with SSE streaming. PRs welcome!


Ecosystem

Part of the 小试AI open-source AI workflow:

Project             Description
telegram-ai-bridge  Same architecture, Telegram interface
recallnest          MCP memory workbench
content-alchemy     5-stage AI writing pipeline
openclaw-tunnel     Docker ↔ host CLI bridge

License

MIT
