deliberate-reasoning-engine

mcp
Security Audit
Warning
Health: Warning
  • License — License: MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 5 GitHub stars
Code: Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions: Passed
  • Permissions — No dangerous permissions requested
Purpose
This MCP server transforms linear AI reasoning into structured, auditable thought graphs. It allows language models to externalize their problem-solving process into a directed acyclic graph (DAG) with semantic categorization and dependency tracking.

Security Assessment
The overall risk is Low. A code scan of 12 files found no dangerous patterns, hardcoded secrets, or requests for dangerous permissions. Based on the repository structure, the tool runs locally as a Node.js script to process reasoning data and does not appear to execute arbitrary shell commands or make external network requests. It does not inherently access sensitive user data beyond the text arguments passed to it by the AI for logging.

Quality Assessment
The project is relatively new but well-structured and actively maintained, with its most recent push occurring today. It uses modern TypeScript and is clearly documented with easy installation instructions for environments like Claude Desktop. However, community trust and visibility are currently very low. With only 5 GitHub stars, the tool has not yet been widely vetted by a large audience, meaning bugs or edge cases might still be undiscovered. It is properly licensed under the standard MIT license, making it safe for personal and commercial use.

Verdict
Safe to use, though developers should expect an early-stage tool with limited community testing.
SUMMARY

MCP server that transforms linear AI reasoning into structured, auditable thought graphs

README.md

Deliberate Reasoning Engine (DRE)


A Model Context Protocol (MCP) server that transforms linear AI reasoning into structured, auditable thought graphs. DRE enables Language Models to externalize their reasoning process as a directed acyclic graph (DAG) with semantic thought types, dependencies, and validation.

🌟 Features

  • 🧠 Semantic Thought Types: Categorize thoughts as Objectives, Hypotheses, Assumptions, Questions, Evidence, Actions, Synthesis, and Critiques
  • 🔗 Graph-Based Dependencies: Build a DAG of thoughts with explicit relationships and dependencies
  • 🚨 Assumption Tracking: Monitor and invalidate assumptions with automatic cascade to dependent thoughts
  • 📊 Hypothesis Scoring: Track supporting and contradicting evidence (coming soon)
  • 💾 Session Persistence: Save and load reasoning sessions (coming soon)
  • ✅ Graph Validation: Detect cycles, contradictions, and orphaned thoughts
  • 🎯 Focused Reasoning: Keep LLMs on track with structured problem decomposition

📦 Installation

As an MCP Server

npm install -g deliberate-reasoning-engine

For Development

git clone https://github.com/haasonsaas/deliberate-reasoning-engine.git
cd deliberate-reasoning-engine
npm install
npm run build

🚀 Quick Start

Configure with Claude Desktop

Add to your Claude Desktop configuration file (on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "dre": {
      "command": "npx",
      "args": ["deliberate-reasoning-engine"]
    }
  }
}

Or use the local development version:

{
  "mcpServers": {
    "dre": {
      "command": "node",
      "args": ["/absolute/path/to/dre/dist/index.js"]
    }
  }
}

Restart Claude Desktop, and you'll see the DRE tools available in the 🔧 menu.

🛠️ Available Tools

log_thought

Log a structured thought with semantic type and dependencies.

Parameters:

  • thought (string, required): The content of the thought
  • thought_type (enum, required): One of:
    • objective: The overall goal of the reasoning task
    • hypothesis: A proposed explanation or solution
    • assumption: A belief taken as true for this reasoning line
    • question: A point of uncertainty to resolve
    • sub_problem: Decomposition of a larger problem
    • evidence: Data from tools or prior knowledge
    • action: A plan to use a tool
    • synthesis: A conclusion from previous thoughts
    • critique: Self-correction or flaw identification
  • dependencies (string[], optional): IDs of thoughts this depends on
  • confidence (number 0-1, optional): Confidence level
  • action_request (object, optional): Tool and parameters to execute
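For illustration, the arguments for a single log_thought call might look like the following (the ID in dependencies is hypothetical; in practice you would use an ID returned by an earlier call):

```json
{
  "thought": "Users churn because onboarding takes too long",
  "thought_type": "hypothesis",
  "dependencies": ["t1"],
  "confidence": 0.6
}
```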

get_thought_graph

Retrieve the current reasoning graph.

Parameters:

  • format (enum, optional): "full" or "summary" (default: "summary")
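As a sketch, retrieving a condensed view of the graph might look like this (following the same use_mcp_tool convention as the usage example in this README; the response shape is illustrative):

```javascript
// Fetch a condensed view of the current reasoning graph
const graph = await use_mcp_tool("dre", "get_thought_graph", {
  format: "summary"
});
// A summary view would typically list thought IDs, types, and statuses
```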

invalidate_assumption

Mark an assumption as invalid, cascading to all dependent thoughts.

Parameters:

  • thought_id (string, required): ID of the assumption to invalidate
  • reason (string, required): Explanation for invalidation

📖 Example Usage

Here's how an LLM might use DRE to analyze a complex decision:

// 1. Set the objective
const objective = await use_mcp_tool("dre", "log_thought", {
  thought: "Should we acquire Company X?",
  thought_type: "objective"
});

// 2. Form hypotheses
const hyp1 = await use_mcp_tool("dre", "log_thought", {
  thought: "Acquiring Company X will increase our market share by 20%",
  thought_type: "hypothesis",
  dependencies: [objective.thought_id],
  confidence: 0.7
});

// 3. Identify assumptions
const assumption = await use_mcp_tool("dre", "log_thought", {
  thought: "Company X's technology is compatible with our stack",
  thought_type: "assumption",
  dependencies: [hyp1.thought_id],
  confidence: 0.8
});

// 4. Break down into sub-problems
const subproblem = await use_mcp_tool("dre", "log_thought", {
  thought: "Verify technical compatibility through due diligence",
  thought_type: "sub_problem",
  dependencies: [assumption.thought_id]
});

// 5. If assumption proves false, invalidate it
await use_mcp_tool("dre", "invalidate_assumption", {
  thought_id: assumption.thought_id,
  reason: "Technical audit revealed major incompatibilities"
});
// This automatically marks the sub-problem and any dependent thoughts as stale

🏗️ Architecture

DRE models reasoning as a directed acyclic graph where:

  • Nodes are thoughts with semantic types
  • Edges represent dependencies between thoughts
  • Status tracking (active/stale) enables dynamic reasoning updates
  • Cascade invalidation ensures reasoning consistency
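The cascade-invalidation behavior above can be sketched in a few lines of TypeScript. This is a simplified model for illustration, not the server's actual implementation:

```typescript
type Status = "active" | "stale";

interface ThoughtNode {
  id: string;
  dependencies: string[]; // IDs of thoughts this node relies on
  status: Status;
}

// Mark a thought stale, then cascade to every thought that depends on it,
// directly or transitively.
function invalidate(nodes: Map<string, ThoughtNode>, id: string): void {
  const node = nodes.get(id);
  if (!node || node.status === "stale") return; // already handled or unknown
  node.status = "stale";
  for (const other of nodes.values()) {
    if (other.dependencies.includes(id)) {
      invalidate(nodes, other.id); // recurse into dependents
    }
  }
}
```

Invalidating an assumption node this way marks its sub-problems and conclusions stale, while branches that never depended on it remain active.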

🤝 Use Cases

  • Strategic Decision Making: Break down complex business decisions
  • Research Planning: Structure research questions and hypotheses
  • Problem Solving: Decompose problems into manageable sub-problems
  • Risk Analysis: Track assumptions and their implications
  • Debugging: Systematic root cause analysis
  • Learning: Structured exploration of new topics

🔧 Development

# Install dependencies
npm install

# Build TypeScript
npm run build

# Run in development mode
npm run dev

# Run tests
npm test

📝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

🐛 Troubleshooting

Common Issues

  1. "Server not found" in Claude Desktop

    • Ensure the path in your config is absolute
    • Restart Claude Desktop after config changes
  2. "Cannot find module" errors

    • Run npm install and npm run build
    • Check that you're using Node.js 18+

📄 License

MIT - see LICENSE for details.

🙏 Acknowledgments

  • Built on the Model Context Protocol
  • Inspired by structured reasoning systems and cognitive architectures
  • Thanks to Anthropic for Claude and the MCP specification

🚦 Roadmap

  • Hypothesis scoring based on evidence
  • Session persistence and resumption
  • Graph visualization export
  • Conflict detection between branches
  • Integration with external reasoning tools
  • Multi-agent reasoning support

