LoopForge

mcp
Security Audit
Failed
Health Warning
  • No license — Repository has no license file
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 7 GitHub stars
Code Failed
  • rm -rf — Recursive force deletion command in .github/workflows/release-dry-run.yml
  • rm -rf — Recursive force deletion command in .github/workflows/release.yml
Permissions Passed
  • Permissions — No dangerous permissions requested
Purpose
LoopForge is a Rust-based Agent OS designed to manage long-running, autonomous AI workflows. It features persistent memory, tool sandboxing, and multi-provider LLM routing for automated software delivery.

Security Assessment
The tool's core functionality involves executing autonomous AI agents, which inherently requires running shell commands and making network requests to LLM providers such as Ollama or OpenAI. No hardcoded secrets were detected, and the tool does not request broadly dangerous permissions. However, the rule-based scan failed because recursive force deletion commands (`rm -rf`) are hardcoded in its CI/CD workflow files (`.github/workflows/release.yml` and `release-dry-run.yml`). While such commands are often standard build cleanup, unguarded shell deletion in automated pipelines raises operational security concerns. Overall risk is rated Medium.
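The usual mitigation for the `rm -rf` finding is to guard the deletion. A minimal sketch, assuming the workflows delete a build staging directory (the function name and path below are illustrative, not taken from LoopForge's workflows):

```shell
# Illustrative guard for CI cleanup; not from the repository's actual workflows.
safe_clean() {
  # Refuse obviously dangerous targets before running rm -rf
  case "$1" in
    ""|"/"|"."|"..") echo "refusing to delete: '$1'" >&2; return 1 ;;
  esac
  rm -rf -- "$1"
}

safe_clean "target/release-staging"
```

The `--` ensures a path starting with `-` is never parsed as a flag, and the case guard rejects empty or root-like targets before anything is removed.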

Quality Assessment
The project appears to be in a very early stage of development. Community visibility and engagement are minimal (7 GitHub stars), and the repository has no license file; using or contributing to unlicensed code carries significant legal and copyright risk. On the positive side, the repository is actively maintained, with the most recent push occurring today, and it provides comprehensive documentation for both users and maintainers.

Verdict
Use with caution — adopt only if you thoroughly audit the Rust source code yourself, keeping in mind the missing license and early-stage CI/CD security warnings.
SUMMARY

Open-source Agent OS in Rust for long-running autonomous workflows: memory, tool sandboxing, and multi-provider LLM routing.

README.md

LoopForge

English | 简体中文

LoopForge is a personal AI engineer for software delivery: long-running workflows, persistent memory, tool sandboxing, and model routing with harness checkpoints.

Brand update

  • Public product name: LoopForge
  • Primary CLI command: loopforge
  • Runtime data path remains ~/.loopforge for now.

Documentation

  • Docs site: https://os.rexai.top
  • (If the custom domain isn’t configured yet) GitHub Pages: https://rexleimo.github.io/LoopForge/
  • 5-minute outcomes: docs-site/tutorials/five-minute-outcomes.md
  • Positioning: docs-site/explanation/why-loopforge.md
  • Runtime architecture: docs-site/explanation/runtime-architecture.md
  • Internal maintainer map: docs/internal/runtime-module-map.md

Status

This repository is bootstrapped with a long-running harness (features.json, init.sh, loopforge-progress.md). Work is tracked by flipping feature passes from false to true.

Install

Option A: Download a prebuilt binary (recommended)

Download the archive for your OS from GitHub Releases, extract it, and put loopforge (or loopforge.exe) somewhere on your PATH.
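The PATH step can be sketched as a small helper; the destination directory below is an illustrative assumption, and the actual archive and binary names come from the Releases page:

```shell
# Sketch: copy an extracted `loopforge` binary onto PATH.
# The destination directory is an illustrative assumption.
install_binary() {
  mkdir -p "$2"                      # create the target dir if needed
  install -m 755 "$1" "$2/loopforge" # copy with executable permissions
}
```

For example, `install_binary ./loopforge "$HOME/.local/bin"` after extracting the archive, assuming that directory is on your PATH.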

Option B: Build from source

# Install to ~/.cargo/bin (recommended for dev)
cargo install --path crates/loopforge-cli --locked
loopforge --help

# Or build a local binary
cargo build --release -p loopforge-cli
./target/release/loopforge --help

Quick start (recommended)

For repository-wide verification during development:

./init.sh

For the best first user run, use onboard:

# 1) Start Ollama
ollama serve

# 2) Run one-command onboarding
loopforge onboard --workspace loopforge-onboard-demo

Useful variations:

# setup-only path (skip first task)
loopforge onboard --workspace loopforge-onboard-demo --skip-agent

# use a more useful starter task
loopforge onboard --workspace loopforge-onboard-demo --starter workspace-brief

After onboarding, LoopForge writes:

  • loopforge-onboard-demo/.loopforge/onboard-report.json
  • loopforge-onboard-demo/.loopforge/onboard-report.md

Run with Ollama (OpenAI-compatible)

LoopForge defaults to ollama at http://127.0.0.1:11434/v1 in ~/.loopforge/config.toml.

# 1) Start Ollama
ollama serve

# 2) Init LoopForge (optional if you use onboard)
loopforge init

# 3) Run an agent session in a workspace directory
mkdir -p /tmp/loopforge-work
loopforge agent run --workspace /tmp/loopforge-work --prompt "Create hello.txt with the word hi"

The optional provider smoke tests are ignored by default and require live services:

# Ollama smoke test
LOOPFORGE_OLLAMA_MODEL=<your-model> cargo test --workspace --test ollama_smoke -- --ignored

# NVIDIA NIM smoke test
NVIDIA_API_KEY=<key> cargo test --workspace --test nvidia_nim_smoke -- --ignored

Releasing (maintainers)

Pushing a v* tag still triggers the Release workflow, which attaches prebuilt archives to a GitHub Release.
On main, maintainers usually no longer need to push the tag manually: once CI succeeds and the workspace version and changelog are ready, the Auto Release Tag workflow creates the missing vX.Y.Z tag automatically, and the existing Release workflow then publishes the GitHub release.
Before every release, follow the versioning/changelog policy in docs/versioning-and-release.md.
If an iteration is marked as "needs version bump", the same change set must include both version number updates and changelog updates (CHANGELOG.md).

Manual fallback:

git tag v1.0.0
git push origin v1.0.0
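To keep the manual tag from drifting away from the workspace version, the tag can be derived from Cargo.toml. A minimal sketch, assuming the standard `version = "X.Y.Z"` line appears before any other `version` key:

```shell
# Sketch: derive "vX.Y.Z" from the first version line of a Cargo.toml.
# Assumes the standard `version = "X.Y.Z"` TOML form.
release_tag() {
  grep -m1 '^version' "$1" | cut -d '"' -f2 | sed 's/^/v/'
}
```

For example: `git tag "$(release_tag Cargo.toml)" && git push origin "$(release_tag Cargo.toml)"`.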

Providers & routing

LoopForge supports multiple LLM providers via drivers:

  • openai_compatible (Ollama / DeepSeek / Kimi / Qwen / GLM / MiniMax / NVIDIA NIM / OpenAI-compatible gateways)
  • dashscope_native (Alibaba DashScope Generation API / Qwen native)
  • zhipu_native (Zhipu GLM native auth/token handling)
  • minimax_native (MiniMax native text/chatcompletion_v2 API)
  • anthropic (Claude API + Anthropic-compatible gateways)
  • gemini (Google Gemini API)

~/.loopforge/config.toml defines providers and routes each task kind to a (provider, model) pair:

[providers.ollama]
kind = "openai_compatible"
base_url = "http://127.0.0.1:11434/v1"
api_key_env = ""
default_model = "llama3.2"

[providers.deepseek]
kind = "openai_compatible"
base_url = "https://api.deepseek.com"
api_key_env = "DEEPSEEK_API_KEY"
default_model = "deepseek-chat"

[router.coding]
provider = "ollama"
model = "default" # uses providers.<name>.default_model

To switch providers, set the provider's api_key_env (if needed) and update [router.*] to point at the provider you want. If you keep model = "default", LoopForge uses providers.<name>.default_model.
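For example, a minimal sketch of rerouting coding tasks to the deepseek provider defined above (after exporting DEEPSEEK_API_KEY):

```toml
# ~/.loopforge/config.toml — route coding tasks to DeepSeek instead of local Ollama
[router.coding]
provider = "deepseek"
model = "default"  # resolves to providers.deepseek.default_model ("deepseek-chat")
```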

Built-in presets include:

  • deepseek (OpenAI-compatible)
  • kimi / kimi_cn (OpenAI-compatible)
  • qwen / qwen_cn / qwen_sg (OpenAI-compatible)
  • qwen_native / qwen_native_cn / qwen_native_sg (DashScope native API)
  • glm / glm_native (OpenAI-compatible / Zhipu native)
  • minimax / minimax_native (OpenAI-compatible / MiniMax native)
  • nvidia (OpenAI-compatible / NVIDIA NIM)
  • minimax_anthropic (Anthropic-compatible gateway)
