bubbaloop
Health — Passed
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 15 GitHub stars
Code — Passed
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions — Passed
- Permissions — No dangerous permissions requested
This tool is an open-source Hardware AI agent that orchestrates cameras, sensors, robots, and IoT fleets. It operates as a local MCP server, allowing AI agents to interact with physical hardware systems via natural language.
Security Assessment
Overall Risk: Medium. The tool inherently deals with sensitive hardware and network operations. It requires handling secrets, specifically Anthropic API keys and Claude OAuth tokens, for its AI agent features. The install process uses a remote bash script (`curl | bash`), which requires careful user review. Additionally, it opens several local network ports (7447, 10001, 8080) for its pub/sub router, WebSocket bridge, and web dashboard. While the automated code scan found no hardcoded secrets or dangerous patterns and noted no dangerous permissions, deploying a tool that controls physical hardware and opens local services warrants careful network configuration.
Quality Assessment
The project demonstrates excellent baseline health. It is actively maintained (last push was today) and uses the permissive Apache-2.0 license. It is written in Rust, which naturally provides strong memory safety guarantees for system-level operations. However, community trust and adoption are currently very low, as indicated by only 15 GitHub stars. Being an early-stage project, it might contain unpolished edge cases or bugs.
Verdict
Use with caution — the automated security scan is clean and it is actively maintained, but its low community adoption and inherent access to physical hardware require strict network and secret management oversight.
Open-source Hardware AI agent. Single Rust binary for cameras, sensors, robots, and IoT fleets — orchestrated by AI agents with memory and real-time telemetry. Runs on Jetson, Raspberry Pi, any Linux.
🦐 Bubbaloop
> "Shrimp-fried cameras, shrimp-grilled sensors, shrimp-sauteed robots..."
> — Bubba, on all the ways to talk to your hardware 🦐
The open-source Hardware AI agent. Talk to your cameras, sensors, and robots in natural language. Manage federated IoT/robotics fleets and automate physical systems — all from a single 13 MB Rust binary.
Why Bubbaloop?
AI agents revolutionized software engineering. Bubbaloop brings that same power to hardware.
| | General AI Agents | Bubbaloop |
|---|---|---|
| Focus | Software tasks, coding, browsing | Cameras, sensors, robots, IoT |
| Runtime | TypeScript / ~200 MB | Rust / ~13 MB |
| Data plane | None | Zenoh (zero-copy pub/sub) |
| Hardware | None | Self-describing sensor nodes |
| Runs on | Desktop / cloud | Jetson, RPi, any Linux ARM64/x86 |
| MCP role | Client (consumes tools) | Server (42+ tools, 3-tier RBAC) |
| Scheduling | Always-on LLM (~$5-10/day) | Offline Tier 1 + LLM Tier 2 (~$0.05/day) |
Quick Install
```bash
# One-line install (Linux x86_64/ARM64)
curl -sSL https://github.com/kornia/bubbaloop/releases/latest/download/install.sh | bash
source ~/.bashrc

# Verify
bubbaloop status
```
What Gets Installed
| Component | Description |
|---|---|
| `zenohd` | Pub/sub router on port 7447 |
| `zenoh-bridge-remote-api` | WebSocket bridge on port 10001 |
| `bubbaloop` | Single ~13 MB binary: CLI + daemon + MCP server |
| Dashboard | Web UI at http://localhost:8080 |
All run as systemd user services with autostart enabled.
Login & Authentication
```bash
# Option 1: API Key (pay-as-you-go)
bubbaloop login
# → Choose [1], paste your key from console.anthropic.com

# Option 2: Claude Subscription (Pro/Max/Team)
claude setup-token   # Run in Claude Code CLI first
bubbaloop login
# → Choose [2], paste the sk-ant-oat01-* token

# Check auth status
bubbaloop login --status

# Remove credentials
bubbaloop logout
```
Basic Usage
```bash
# Check system status
bubbaloop status

# Start daemon (runs agent runtime + MCP server + node manager)
bubbaloop up

# Talk to your hardware via Claude AI (agents run daemon-side)
bubbaloop agent chat "What sensors do I have?"
bubbaloop agent chat                              # Interactive REPL
bubbaloop agent chat -a camera-expert "describe the video feed"
bubbaloop agent list                              # Show running agents + models

# System diagnostics with auto-fix
bubbaloop doctor --fix

# Node management
bubbaloop node list
bubbaloop node add user/repo      # Add from GitHub
bubbaloop node build my-node      # Build
bubbaloop node start my-node      # Start service
bubbaloop node logs my-node -f    # Follow logs
```
Node Lifecycle
```bash
# 1. Create a new node (generates SDK-based scaffold)
bubbaloop node init my-sensor --node-type rust
# Edit src/main.rs — implement Node trait (init + run)
# The SDK handles Zenoh session, health, schema, config, shutdown

# 2. Register with daemon
bubbaloop node add ./my-sensor

# 3. Build
bubbaloop node build my-sensor

# 4. Install as systemd service
bubbaloop node install my-sensor

# 5. Start
bubbaloop node start my-sensor

# 6. View logs
bubbaloop node logs my-sensor
```
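The `Node` trait the scaffold asks you to implement is not spelled out in this README beyond "init + run". The sketch below is an illustrative stand-in — the trait shape, method signatures, and the `MySensor` type are all assumptions — showing the init-then-run lifecycle the SDK is described as driving:

```rust
// Illustrative stand-in for the SDK's Node trait. The README only says a
// node implements init + run; the exact signatures here are assumptions.
pub trait Node {
    /// One-time setup (the real SDK opens the Zenoh session and declares
    /// the health/schema/config queryables here).
    fn init(&mut self) -> Result<(), String>;
    /// One iteration of the node's main loop, returning a reading.
    fn run(&mut self) -> Result<f64, String>;
}

/// Hypothetical sensor node producing a fixed temperature reading.
pub struct MySensor {
    initialized: bool,
    temperature_c: f64,
}

impl MySensor {
    pub fn new() -> Self {
        MySensor { initialized: false, temperature_c: 21.5 }
    }
}

impl Node for MySensor {
    fn init(&mut self) -> Result<(), String> {
        self.initialized = true;
        Ok(())
    }

    fn run(&mut self) -> Result<f64, String> {
        if !self.initialized {
            return Err("init() was not called".to_string());
        }
        Ok(self.temperature_c)
    }
}
```

A real node would be generated with `bubbaloop node init` and built against the actual SDK trait rather than this stand-in.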
YAML Skills (Zero-Code Sensors)
Skills are declarative YAML configs that map to nodes. Drop a YAML file in `~/.bubbaloop/skills/` and `bubbaloop up` auto-installs the driver node and starts it.
```yaml
# ~/.bubbaloop/skills/front-camera.yaml
name: front-door
driver: rtsp
config:
  url: rtsp://192.168.1.100/stream
```

```bash
# Load all skills, auto-install drivers, start nodes
bubbaloop up
```
AI Agent Integration (MCP)
Bubbaloop includes an MCP (Model Context Protocol) server — the sole control interface for AI agents. The daemon starts it automatically on port 8088.
```bash
# MCP over stdio (for Claude Code / local agents)
bubbaloop mcp --stdio

# MCP over HTTP (daemon mode, auto-started)
bubbaloop daemon
# → MCP server: http://127.0.0.1:8088/mcp
```
Available MCP tools:
| Tool | Description |
|---|---|
| `list_nodes` | List all nodes with status |
| `get_node_manifest` | Get a node's capabilities and topics |
| `send_command` | Send a command to a node |
| `install_node` / `uninstall_node` | Install or remove nodes |
| `start_node` / `stop_node` | Control node lifecycle |
| `get_node_logs` | Read node service logs |
| `discover_nodes` | Fleet-wide manifest discovery |
| `query_zenoh` | Query any Zenoh key expression |
42 MCP tools + agent-internal tools. Configure Claude Code via `.mcp.json` (already in project root).
Agent-internal tools (daemon-side only, not exposed via MCP): `memory_search`, `memory_forget`, `schedule_task`, `create_proposal`, `read_file`, `write_file`, `run_command`.
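The Model Context Protocol frames tool invocations as JSON-RPC 2.0 `tools/call` requests. Assuming the daemon's HTTP endpoint accepts standard MCP framing (the helper below is illustrative, not part of the CLI or SDK), a request body for `list_nodes` could be built like this and POSTed to `http://127.0.0.1:8088/mcp`:

```rust
/// Build a JSON-RPC 2.0 `tools/call` request body per the MCP spec.
/// Whether the bubbaloop daemon accepts exactly this framing is an
/// assumption; `arguments_json` must already be valid JSON.
fn mcp_tools_call(id: u64, tool: &str, arguments_json: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{},"method":"tools/call","params":{{"name":"{}","arguments":{}}}}}"#,
        id, tool, arguments_json
    )
}
```

For example, `mcp_tools_call(1, "list_nodes", "{}")` yields a body whose `params.name` is `list_nodes` and whose `arguments` object is empty.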
Architecture
```
┌──────────────────────────────────┐
│ AI Agent (Claude via MCP)        │
│ http://127.0.0.1:8088/mcp        │
└──────────────┬───────────────────┘
               │
Dashboard (React) ─┬─ WebSocket ───┤─── Zenoh pub/sub
CLI ───────────────┘               │
                                   │
Daemon ────────────────────────────┤
 ├─ Node Manager (lifecycle)       │
 ├─ MCP Server (42 tools)          │
 ├─ Telemetry Watchdog             │
 ├─ Agent Runtime (multi-agent)    │
 └─ Systemd D-Bus (zbus)           │
                                   │
Nodes (self-describing) ───────────┘
 ├─ rtsp-camera             Rust     [H264 video, SHM raw frames]
 ├─ camera-object-detector  Python   [YOLO11 detection on SHM frames]
 ├─ camera-vlm              Python   [VLM scene description (planned)]
 ├─ system-telemetry        Python   [CPU, memory, disk, network]
 ├─ network-monitor         Python   [HTTP, DNS, ping health checks]
 ├─ openmeteo               Python   [weather: current, hourly, daily]
 └─ custom...               Rust/Py  [your node here]
```
The daemon hosts the agent runtime (multi-agent Zenoh gateway) alongside the MCP server. Agents are configured via ~/.bubbaloop/agents.toml with per-agent identity and memory in ~/.bubbaloop/agents/{id}/. The CLI is a thin Zenoh client — all LLM processing runs daemon-side.
Per-agent features:
- Soul: `identity.md` (personality) + `capabilities.toml` (model, heartbeat). Hot-reload on file change.
- 4-Tier Memory: World State (live SQLite, sensor-derived) → RAM (current turn) → NDJSON (episodic, BM25 search) → SQLite (beliefs, jobs, proposals).
- Adaptive Heartbeat: Arousal-based decay — active agents check in frequently, idle agents stay quiet.
- Telemetry Watchdog: CPU/RAM/disk monitoring with circuit breakers and 5 severity levels.
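As one concrete reading of "arousal-based decay" (the README describes only the behavior, not the formula, so the exponential schedule below is an assumption), the check-in interval might double with each idle tick up to a cap:

```rust
/// Hypothetical heartbeat schedule: double the interval for every tick
/// without activity, capped at `max_secs`. The decay curve bubbaloop
/// actually uses is not documented in this README.
fn next_heartbeat_secs(base_secs: f64, idle_ticks: u32, max_secs: f64) -> f64 {
    (base_secs * 2f64.powi(idle_ticks as i32)).min(max_secs)
}
```

An active agent (`idle_ticks = 0`) keeps the base interval; after ten quiet ticks it sits at the cap, matching "active agents check in frequently, idle agents stay quiet."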
Node Contract
Every node is self-describing with standard queryables:
```
{node}/schema   → Protobuf FileDescriptorSet (binary)
{node}/manifest → Capabilities, topics, commands (JSON)
{node}/health   → Status and uptime (JSON)
{node}/config   → Current configuration (JSON)
{node}/command  → Imperative actions (JSON request/response)
```
AI agents discover nodes via a `bubbaloop/**/manifest` wildcard query, then interact through commands and data subscriptions.
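Because the contract is uniform, a node's full key set follows mechanically from its name. A small illustrative helper (not part of the SDK):

```rust
/// Derive the five standard queryable keys for a self-describing node,
/// per the node contract above.
fn node_queryables(node: &str) -> [String; 5] {
    ["schema", "manifest", "health", "config", "command"]
        .map(|suffix| format!("{}/{}", node, suffix))
}
```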
Available Nodes
Official nodes live in bubbaloop-nodes-official. Install any node with bubbaloop node add.
Sensors
| Node | Type | Description | Topics |
|---|---|---|---|
| rtsp-camera | Rust | RTSP camera capture with hardware H264 decode via GStreamer | camera/{name}/compressed (global), camera/{name}/raw (SHM local) |
| system-telemetry | Python | CPU, memory, disk, network, and load metrics via psutil | system-telemetry/metrics |
| network-monitor | Python | HTTP, DNS, and ICMP ping health checks | network-monitor/status |
| openmeteo | Python | Open-Meteo weather data (current, 48h hourly, 7-day daily) | weather/current, weather/hourly, weather/daily |
Processors
| Node | Type | Description | Topics |
|---|---|---|---|
| camera-object-detector | Python | YOLO11 object detection on raw camera frames (SHM) | Subscribes {name}/raw (local), publishes {name}/detections |
| camera-vlm | Python | Vision language model scene description on camera frames (SHM) | Subscribes {name}/raw (local), publishes {name}/description |
All topics are prefixed with bubbaloop/global/{machine_id}/ (network-visible) or bubbaloop/local/{machine_id}/ (SHM-only).
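That prefixing rule can be captured in a one-line helper (illustrative; it assumes the key layout is exactly `bubbaloop/{scope}/{machine_id}/{topic}` as stated above, and the machine id in the example is hypothetical):

```rust
/// Build a fully-qualified Zenoh key for a node topic. `scope` is
/// "global" (network-visible) or "local" (SHM-only, same machine).
fn topic_key(scope: &str, machine_id: &str, topic: &str) -> String {
    format!("bubbaloop/{}/{}/{}", scope, machine_id, topic)
}
```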
Development
```bash
git clone https://github.com/kornia/bubbaloop.git
cd bubbaloop
pixi install

pixi run build      # Build all
pixi run test       # Run tests
pixi run daemon     # Run daemon
pixi run dashboard  # Run web dashboard
```
Development note: Linux + systemd is the supported production path. In Docker or
other environments without D-Bus/systemd, the daemon falls back to a native
process supervisor intended for development only. That fallback does not provide
journalctl-backed logs or full systemd parity.
Service Management
```bash
# View all services
systemctl --user list-units 'bubbaloop-*'

# Restart daemon
systemctl --user restart bubbaloop-daemon

# View logs
journalctl --user -u bubbaloop-daemon -f
```
These service-management commands apply to the systemd backend on Linux. They do
not apply to the development fallback used in Docker/non-systemd environments.
Troubleshooting
```bash
# Quick diagnostics
bubbaloop doctor

# Auto-fix common issues
bubbaloop doctor --fix

# JSON output for scripting
bubbaloop doctor --json
```
Common issues:
- Zenoh timeout: check `pgrep zenohd`, restart if missing
- Build fails: check `bubbaloop node logs <name>` for errors
- Auth failed: run `bubbaloop login --status` to check credentials
Documentation
- Quickstart: See docs/getting-started/quickstart.md
- Agent guide: See docs/agent-guide.md for multi-agent setup and MCP tools
- Architecture: See ARCHITECTURE.md for design decisions
- Roadmap: See ROADMAP.md for what's next
- Coding standards: See CLAUDE.md for conventions
- CLI reference: `bubbaloop --help` or `bubbaloop node --help`
License
Apache-2.0