AzulClaw

mcp
Security Audit
Warn
Health Warn
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push today
  • Low visibility — Only 8 GitHub stars
Code Pass
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool is a local-first AI assistant that combines a Tauri desktop shell with a Python orchestration layer and Azure OpenAI. It provides a sandboxed workspace environment, restricting filesystem access via MCP tools to ensure the AI remains isolated and auditable rather than having unrestricted access to the user's machine.

Security Assessment
Risk Rating: Low. The automated code scan checked 12 files and found no dangerous patterns, hardcoded secrets, or requests for risky permissions. The project intentionally limits its own attack surface by confining filesystem actions within a strict workspace boundary. However, it does make external network requests to Azure services and runs a local HTTP API on port 3978 to communicate with the desktop frontend. Sensitive data handling depends entirely on the user configuring the provided `.env.local` file with their own private Azure credentials.

Quality Assessment
The project is actively maintained, with its most recent push occurring today. It uses the permissive MIT license and features a clean, professional architecture alongside a detailed README. The only notable weakness is very low community visibility. Having only 8 GitHub stars means it has not yet undergone widespread peer review or battle-testing by a larger developer community.

Verdict
Safe to use, though its newness and low community adoption make it best suited for experimental or internal development rather than immediate enterprise production.
SUMMARY

A secure hybrid AI assistant reimagined from OpenClaw, combining the Microsoft Agent Framework and Azure OpenAI with zero-trust, MCP-sandboxed desktop tools.

README.md

AzulClaw

Join the AzulClaw Discord community

Build AzulClaw with us, share feedback, and follow product progress in the community server.

AzulClaw is a local-first AI companion that combines a secure desktop workspace, a Python orchestration layer, and Azure-backed reasoning. The product is designed around one constraint: the assistant must be useful without being allowed to roam freely across the user's machine.

What AzulClaw is

  • A desktop shell built with Tauri, React, and TypeScript.
  • A local Python runtime that handles chat, memory, scheduling, process tracking, and Bot Framework activities.
  • A sandboxed file tool layer exposed through MCP so filesystem access stays isolated and auditable.
  • An optional Azure relay for public channels such as Telegram or Alexa without exposing the local runtime directly.
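The sandboxed file tool layer rests on one invariant: every resolved path must stay inside the workspace root. A minimal sketch of that containment check (the function name and layout here are hypothetical, not taken from the AzulClaw codebase):

```python
from pathlib import Path

def resolve_in_workspace(workspace_root: str, requested: str) -> Path:
    """Resolve a requested path, refusing anything that escapes the workspace.

    Hypothetical sketch of the kind of boundary check an MCP file tool
    performs; AzulClaw's actual implementation may differ.
    """
    root = Path(workspace_root).resolve()
    target = (root / requested).resolve()
    # Resolving before comparing defeats traversal tricks like "../../etc/passwd"
    if not target.is_relative_to(root):
        raise PermissionError(f"{requested!r} escapes the workspace boundary")
    return target
```

With a workspace of `/tmp/ws`, `resolve_in_workspace("/tmp/ws", "notes/todo.md")` returns the resolved path, while `resolve_in_workspace("/tmp/ws", "../secret")` raises. (`Path.is_relative_to` requires Python 3.9+.)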

Architecture at a glance

Desktop UI (Tauri + React)
        |
        v
Local HTTP API (aiohttp)
        |
        +--> Conversation orchestrator
        +--> Runtime scheduler and heartbeats
        +--> SQLite memory
        +--> Bot Framework adapter
        |
        v
MCP sandbox (filesystem tools inside workspace boundary)
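The conversation orchestrator's fast/slow triage (listed under core capabilities below) can be pictured as a routing heuristic in front of the two model lanes. This is a toy sketch; AzulClaw's actual triage criteria are not documented here:

```python
def pick_lane(message: str, fast_limit: int = 200) -> str:
    """Route short, simple messages to the fast lane, everything else to slow.

    Toy heuristic only -- the real orchestrator's triage logic may weigh
    different signals entirely.
    """
    needs_depth = any(w in message.lower() for w in ("plan", "analyze", "refactor"))
    return "slow" if needs_depth or len(message) > fast_limit else "fast"
```

The design point the sketch illustrates is that triage happens before any model call, so cheap messages never pay the latency of the slow lane.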

For public channels, the production path is:

Channel -> Azure Bot Service -> Azure Function -> Azure Service Bus -> Local AzulClaw

Repository layout

AzulClaw/
|- azul_backend/     Python runtime, memory, channels, MCP integration
|- azul_desktop/     Desktop shell and frontend views
|- azure/            Azure relay resources and deployment artifacts
|- docs/             Canonical product and technical documentation
|- memory/           Local runtime state generated during development
|- scripts/          Utility scripts
|- README.md
`- requirements.txt

Quick start

1. Install backend dependencies

python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt

2. Configure the backend

Copy-Item azul_backend\azul_brain\.env.example azul_backend\azul_brain\.env.local

Fill in the Azure values you want to use. For local desktop iteration, only the backend config is required.
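A startup check can catch a half-filled config before the backend tries to reach Azure. A minimal sketch, with illustrative variable names only (check `.env.example` for the keys the project actually expects):

```python
# Illustrative key names -- consult azul_backend/azul_brain/.env.example
# for the real ones.
REQUIRED_KEYS = ("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_API_KEY")

def missing_keys(env: dict) -> list:
    """Return the required keys that are absent or empty in the given mapping."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```

Passing `dict(os.environ)` after loading `.env.local` would list anything still unset, so the launcher can fail fast with a clear message.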

3. Start the backend

python -m azul_backend.azul_brain.main_launcher

The local API listens on http://localhost:3978.

4. Start the desktop shell

cd azul_desktop
npm install
npm run dev

For the native Tauri shell:

npm run tauri:dev

Core capabilities

  • Fast and slow model lanes with automatic triage.
  • Streaming desktop chat over NDJSON.
  • Local persistent memory in SQLite with vector, keyword, and hybrid retrieval.
  • Hatching flow for profile and workspace setup.
  • Workspace browsing restricted to a dedicated sandbox root.
  • Heartbeats and scheduled jobs stored locally.
  • Optional Azure relay for Bot Framework channels.
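Streaming over NDJSON means each line of the response body is a standalone JSON object, so a client can render tokens as they arrive. A minimal consumer sketch (the `delta` field name is an assumption, not AzulClaw's actual wire format):

```python
import json
from typing import Iterable, Iterator

def parse_ndjson(lines: Iterable[str]) -> Iterator[dict]:
    """Yield one decoded object per non-empty NDJSON line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Example stream as the desktop chat might receive it; the "delta"
# field name is hypothetical.
stream = ['{"delta": "Hel"}', '{"delta": "lo"}', "", '{"done": true}']
text = "".join(chunk.get("delta", "") for chunk in parse_ndjson(stream))
```

Because each line is self-delimiting, the frontend never needs to buffer the whole response before parsing, which is what makes NDJSON a good fit for streaming chat.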

Documentation

Start with the Documentation Hub.

Recommended reading order:

  1. Architecture Overview
  2. Setup and Development
  3. Security Model
  4. Component Reference
  5. Memory System

Notes for contributors

  • Keep documentation in English.
  • Treat docs/ as the canonical source for product and architecture decisions.
  • Do not commit .env.local, generated workspace data, or credentials.
  • The MCP sandbox is a security boundary, not a convenience wrapper.
