ocpipe
Warning
Health: Warning
- License — MIT
- Description — Repository has a description
- Active repo — Last push today
- Low visibility — Only 5 GitHub stars
Code: Passed
- Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions: Passed
- Permissions — No dangerous permissions requested
Purpose
This TypeScript library enables developers to build structured LLM pipelines with type-safe inputs and outputs using Zod schemas. It integrates with OpenCode and Claude Code to orchestrate multi-step AI workflows with checkpointing and auto-correction.
Security Assessment
The tool acts as an orchestration layer over external LLM agent SDKs and CLIs. It relies on executing shell commands (invoking the OpenCode or Claude Code binaries) to function, which is its intended behavior but requires awareness. The light code scan of 12 files found no dangerous patterns, no hardcoded secrets, and no suspicious network requests to unknown endpoints. Any sensitive data handling would depend on the prompts you pass through it and the LLM providers you configure. Overall risk: Low.
Quality Assessment
The project is very new and has minimal community traction, reflected by only 5 GitHub stars. However, it shows active development (last push was today) and carries a clean MIT license. The codebase is small and readable. A notable concern is that the default backend requires a specific fork of OpenCode rather than the official release, meaning you are depending on an unofficial binary until certain upstream PRs are merged. This adds a layer of supply-chain fragility. TypeScript typing and the use of Zod for validation are positive indicators of code quality.
Verdict
Use with caution — the code itself appears safe and well-structured, but the heavy reliance on unofficial CLI forks and the project's extremely early stage make it better suited for experimentation than for critical production pipelines.
structured prompts ft. Zod
README.md
ocpipe
Build LLM pipelines with OpenCode, Claude Code, and Zod.
Inspired by DSPy.
- Type-safe — Define inputs and outputs with Zod schemas
- Modular — Compose modules into complex pipelines
- Checkpoints — Resume from any step
- Multi-backend — Choose between OpenCode (75+ providers) or Claude Code SDK
- Auto-correction — Fixes schema mismatches automatically
Quick Start
```shell
bun add ocpipe
```
```typescript
import { signature, field, module, Pipeline, createBaseState } from 'ocpipe'

const Greet = signature({
  doc: 'Generate a friendly greeting for the given name.',
  inputs: { name: field.string('The name of the person to greet') },
  outputs: { greeting: field.string('A friendly greeting message') },
})

const pipeline = new Pipeline(
  {
    name: 'hello-world',
    defaultModel: { providerID: 'opencode', modelID: 'minimax-m2.1-free' },
    defaultAgent: 'default',
  },
  createBaseState,
)

const result = await pipeline.run(module(Greet), { name: 'World' })
console.log(result.data.greeting)
```

Extract types from signatures:

```typescript
import { InferInputs, InferOutputs } from 'ocpipe'

type GreetIn = InferInputs<typeof Greet>   // { name: string }
type GreetOut = InferOutputs<typeof Greet> // { greeting: string }
```
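For readers curious how this kind of type extraction works, the general technique can be shown with plain TypeScript mapped types. The `Field`, `str`, and `Infer` names below are simplified stand-ins, not ocpipe's actual implementation.

```typescript
// Hypothetical sketch (not ocpipe's source): deriving input/output types
// from a signature-like object with mapped and conditional types.
type Field<T> = { description: string; _type: T };

const str = (description: string): Field<string> => ({ description, _type: '' as string });

const Greet = {
  doc: 'Generate a friendly greeting for the given name.',
  inputs: { name: str('The name of the person to greet') },
  outputs: { greeting: str('A friendly greeting message') },
};

// Map each field to the value type it carries.
type Infer<F> = { [K in keyof F]: F[K] extends Field<infer T> ? T : never };

type GreetIn = Infer<typeof Greet['inputs']>;   // { name: string }
type GreetOut = Infer<typeof Greet['outputs']>; // { greeting: string }

const input: GreetIn = { name: 'World' };
console.log(input.name); // "World"
```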
Backends
ocpipe supports two backends for running LLM agents:
OpenCode (default) - Requires opencode CLI in your PATH. Supports 75+ providers.
```typescript
const pipeline = new Pipeline(
  {
    name: 'my-pipeline',
    defaultModel: {
      providerID: 'anthropic',
      modelID: 'claude-sonnet-4-20250514',
    },
    defaultAgent: 'default',
  },
  createBaseState,
)
```
Claude Code - Uses @anthropic-ai/claude-agent-sdk. Install as a peer dependency.
```typescript
// modelID: 'opus', 'sonnet', or 'haiku'
defaultModel: { backend: 'claude-code', modelID: 'sonnet' },

// permissionMode: 'default' | 'acceptEdits' | 'bypassPermissions' | 'plan'
claudeCode: { permissionMode: 'acceptEdits' },
```
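Putting the fields above into a full constructor call might look like the following. This is a sketch assuming the same `Pipeline` shape as the Quick Start example; the exact placement of `claudeCode` in the options object is an assumption, not confirmed against ocpipe's docs.

```typescript
// Sketch: Claude Code backend config in a full Pipeline (placement assumed).
import { Pipeline, createBaseState } from 'ocpipe'

const pipeline = new Pipeline(
  {
    name: 'claude-pipeline',
    defaultModel: { backend: 'claude-code', modelID: 'sonnet' },
    defaultAgent: 'default',
    claudeCode: { permissionMode: 'acceptEdits' },
  },
  createBaseState,
)
```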
Requirements
For OpenCode backend: Currently requires this OpenCode fork. Once the relevant upstream PRs are merged, the official release will work.
For Claude Code backend: Install the SDK as a peer dependency:
```shell
bun add @anthropic-ai/claude-agent-sdk
```
Documentation
- Getting Started - Tutorial with examples
- Design - Architecture and concepts
- Contributing - Development setup
An Aperture Robotics project.