neurodivergent-memory
Health Warnings
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 6 GitHub stars
Code Warnings
- fs module — File system access in add-checks.cjs
- fs module — File system access in apply-fixes.cjs
Permissions Passed
- Permissions — No dangerous permissions requested
This MCP server provides a persistent memory and knowledge graph system for AI models. It organizes information into specific knowledge domains using a "district" model and relies on BM25 semantic ranking to search and retrieve stored memories via bidirectional connections.
Security Assessment
Overall risk: Low. The tool requires no dangerous permissions to operate. File system access is present but isolated to two local build-related scripts (`add-checks.cjs` and `apply-fixes.cjs`), which is typical behavior for Node.js packages rather than a red flag. There is no evidence of hardcoded secrets, unauthorized network requests, or unexpected shell command execution. The MIT license further ensures there are no restrictive or ambiguous legal traps in the code. Because it acts as a local memory store, the primary sensitive data it accesses is the text you explicitly feed it, which remains on your machine.
Quality Assessment
The project appears to be in active, early-stage development. It was pushed to today but has extremely low community visibility, with only 6 stars, meaning it has not yet been widely battle-tested or peer-reviewed by a large audience. However, the developer has taken several positive steps indicative of good quality, such as providing an MIT license, offering clear documentation, tracking Node.js LTS compatibility, and distributing official Docker containers.
Verdict
Use with caution — the code itself appears safe and well-documented, but its lack of widespread community adoption makes it better suited for local experimentation rather than production environments.
A TypeScript-based MCP server implementing a memory system inspired by neurodivergent cognitive styles. It organizes thoughts into five districts (knowledge domains), ranks search results using BM25 semantic ranking, and stores memories as a persistent knowledge graph with bidirectional connections.
neurodivergent-memory MCP Server
Quick-start
Windows
# Download and install Chocolatey:
powershell -c "irm https://community.chocolatey.org/install.ps1|iex"
# Download and install Node.js:
choco install nodejs --version="24.14.1"
# Verify the Node.js version:
node -v # Should print a Node.js 24.x version.
# Verify npm version:
npm -v # Should print an npm 11.x version.
# Run the packaged neurodivergent-memory CLI without a global install
npx neurodivergent-memory@latest init-agent-kit
Linux/macOS
# Download and install nvm:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.4/install.sh | bash
# in lieu of restarting the shell
. "$HOME/.nvm/nvm.sh"
# Download and install Node.js:
nvm install 24
# Verify the Node.js version:
node -v # Should print a Node.js 24.x version.
# Verify npm version:
npm -v # Should print an npm 11.x version.
# Run the packaged neurodivergent-memory CLI without a global install
npx neurodivergent-memory@latest init-agent-kit
Model Flow
flowchart LR
A[Client MCP Request] --> B[MCP Server Stdio Transport]
B --> C{Request Type}
C -->|Tools| D[Tool Handler]
C -->|Resources| E[Resource Handler]
C -->|Prompts| F[Prompt Handler]
D --> G[NeurodivergentMemory Core]
E --> G
F --> G
G --> H[Memory Graph Store]
G --> I[BM25 Index]
H --> J[Persisted JSON Snapshot]
D --> K[MCP JSON Response]
E --> K
F --> K
K --> A
Flow notes:
- Memory operations update both graph state and BM25 index.
- Persistence writes to the local snapshot file for restart continuity.
- All MCP responses return through stdio transport.
Features
Five Memory Districts
Memories are organized by cognitive domain:
- logical_analysis — Structured thinking, problem solving, and analytical processes
- emotional_processing — Feelings, emotional responses, and affective states
- practical_execution — Action-oriented thoughts, tasks, and implementation
- vigilant_monitoring — Awareness, safety concerns, and protective thinking
- creative_synthesis — Novel connections, creative insights, and innovative thinking
Resources
- Explore memory districts and individual memories via `memory://` URIs
- Each memory includes content, tags, emotional metadata, and connection information
- Access memories as JSON resources with full metadata
Tools
- `store_memory` — Create new memory nodes with optional emotional valence and intensity
- `retrieve_memory` — Fetch a specific memory by ID
- `update_memory` — Modify content, tags, district, emotional_valence, intensity, or project attribution
- `delete_memory` — Remove a memory and all its connections
- `connect_memories` — Create bidirectional edges between memory nodes
- `search_memories` — BM25-ranked semantic search with optional goal context, recency bias, and filters (district, project_id, tags, epistemic status, emotional valence, intensity, min_score)
- `traverse_from` — Graph traversal up to N hops from a starting memory
- `related_to` — Find memories by graph proximity + BM25 semantic blend, with optional goal context and epistemic-status filters
- `list_memories` — Paginated listing with optional district/archetype/project_id/epistemic-status filters
- `memory_stats` — Aggregate statistics (totals, per-district/per-project counts, most-accessed, orphans) with optional project scope
- `server_handshake` — Return runtime server identity/version details for explicit client-side version confirmation
- `storage_diagnostics` — Show the resolved snapshot path, WAL path, and effective persistence source in one response
- `import_memories` — Bulk-import from inline JSON entries or a snapshot `file_path`, with `dry_run`, dedupe policies, and explicit snapshot migration flags
- `prepare_memory_city_context` — Tool mirror of `explore_memory_city` for clients that support tools but do not invoke MCP prompts
- `prepare_synthesis_context` — Tool mirror of `synthesize_memories` for prompt-limited clients
- `prepare_packetized_synthesis_context` — Tool mirror of `synthesize_memory_packets` for prompt-limited or attachment-constrained clients
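As an illustrative sketch of the tool surface (the argument names follow the tool descriptions above; the values and project id are invented examples), an MCP `tools/call` request for `store_memory` might look like:

```json
{
  "name": "store_memory",
  "arguments": {
    "content": "Prefer explicit state over implicit state in reducers.",
    "district": "logical_analysis",
    "tags": ["topic:typescript", "kind:pattern", "persistence:durable"],
    "emotional_valence": 0.2,
    "intensity": 0.6,
    "project_id": "my-app"
  }
}
```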
Prompts
- `explore_memory_city` — Guided exploration of districts and memory organization
- `synthesize_memories` — Create new insights by connecting existing memories
- `synthesize_memory_packets` — Packetized synthesis prompt for attachment-constrained clients; emits one coverage manifest plus bounded memory slices that summarize the broader graph
Use `synthesize_memories` when the MCP client can comfortably consume many raw memory resources. Use `synthesize_memory_packets` when the caller path is attachment-constrained or when you need broader graph coverage in a small number of structured resources.
For maximum interoperability across MCP clients, the server exposes the same synthesis/exploration context in two forms:
- Prompts via `prompts/list` + `prompts/get` for clients that implement MCP prompt invocation.
- Tools via the `prepare_*_context` tools for clients that support MCP tools but ignore or under-support prompts.
Some clients, such as Cline, expose MCP prompts as namespaced slash commands in the form `/mcp:<server-name>:<prompt-name>` rather than `/<prompt-name>`.
Core Concepts
Memory Archetypes
Each memory is assigned an archetype tied to its district:
- scholar — logical_analysis
- merchant — practical_execution
- mystic — emotional_processing and creative_synthesis
- guard — vigilant_monitoring
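The district-to-archetype mapping above can be captured as a small lookup table. This is a sketch for illustration (the type and function names are not the server's actual identifiers):

```typescript
type District =
  | "logical_analysis"
  | "emotional_processing"
  | "practical_execution"
  | "vigilant_monitoring"
  | "creative_synthesis";

type Archetype = "scholar" | "merchant" | "mystic" | "guard";

// One archetype per district, per the list above;
// mystic covers both emotional_processing and creative_synthesis.
const ARCHETYPES: Record<District, Archetype> = {
  logical_analysis: "scholar",
  practical_execution: "merchant",
  emotional_processing: "mystic",
  creative_synthesis: "mystic",
  vigilant_monitoring: "guard",
};

function archetypeFor(district: District): Archetype {
  return ARCHETYPES[district];
}
```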
Semantic Ranking
Search uses Okapi BM25 ranking (k1=1.5, b=0.75) without requiring embeddings or cloud calls. Results are normalized to 0–1 score range.
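To make the ranking math concrete, here is a from-scratch sketch of Okapi BM25 with k1 = 1.5 and b = 0.75 and scores normalized to the 0–1 range. The whitespace tokenizer and the +1-inside-the-log idf variant are assumptions for the example, not the server's actual implementation:

```typescript
const K1 = 1.5;
const B = 0.75;

function tokenize(text: string): string[] {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

// Score every document against the query, then normalize so the best hit is 1.
function bm25Rank(query: string, docs: string[]): number[] {
  const tokenized = docs.map(tokenize);
  const avgLen = tokenized.reduce((s, d) => s + d.length, 0) / docs.length;
  const scores = tokenize(query).reduce((acc, term) => {
    const df = tokenized.filter((d) => d.includes(term)).length;
    // idf with +1 inside the log keeps scores non-negative.
    const idf = Math.log(1 + (docs.length - df + 0.5) / (df + 0.5));
    return acc.map((score, i) => {
      const tf = tokenized[i].filter((t) => t === term).length;
      const denom = tf + K1 * (1 - B + B * (tokenized[i].length / avgLen));
      return score + idf * ((tf * (K1 + 1)) / denom);
    });
  }, docs.map(() => 0));
  const max = Math.max(...scores);
  return max > 0 ? scores.map((s) => s / max) : scores; // normalize to 0..1
}
```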
Emotional Metadata
Each memory can optionally carry:
- emotional_valence (-1 to 1) — Emotional charge or affective tone
- intensity (0–1) — Mental energy or importance weight
Epistemic Status
Memories can optionally carry epistemic_status to distinguish tentative planning from validated knowledge.
- `draft` — provisional or planning-oriented
- `validated` — confirmed and safe to treat as established
- `outdated` — superseded but retained for history
When `store_memory` or `import_memories` creates a new `practical_execution` memory without an explicit `epistemic_status`, the server defaults it to `draft` if the memory has a task tag. The canonical task tag is `kind:task`, and the server also accepts the compatibility synonyms `type:task` and bare `task`. This keeps planning notes from silently presenting as settled fact.
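That defaulting rule can be sketched as a small pure function (the function name and return convention are illustrative, not the server's actual code):

```typescript
type EpistemicStatus = "draft" | "validated" | "outdated";

// Canonical task tag plus the documented compatibility synonyms.
const TASK_TAGS = new Set(["kind:task", "type:task", "task"]);

// New practical_execution memories with a task tag default to "draft"
// unless the caller supplied an explicit epistemic_status.
function defaultEpistemicStatus(
  district: string,
  tags: string[],
  explicit?: EpistemicStatus
): EpistemicStatus | undefined {
  if (explicit) return explicit;
  if (district === "practical_execution" && tags.some((t) => TASK_TAGS.has(t))) {
    return "draft";
  }
  return undefined; // no default applied
}
```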
Project Attribution and Scoped Retrieval
Memories can optionally include a first-class project_id for attribution and scoped retrieval across multi-project graphs.
- `project_id` is optional on writes (`store_memory`, `update_memory`, `import_memories`).
- `update_memory` accepts `project_id: null` to clear existing project attribution.
- `search_memories`, `list_memories`, and `memory_stats` accept an optional `project_id` filter.
- `search_memories`, `list_memories`, and `related_to` accept optional `epistemic_statuses` filters so callers can avoid stale planning memories when appropriate.
- `search_memories` accepts optional `context` and `recency_weight` parameters. Context is blended into ranking as a lightweight BM25 boost; `recency_weight` must be between `0` and `1` and adds a recency boost without replacing semantic relevance.
- `search_memories` accepts `min_intensity`/`max_intensity` as the preferred intensity filter names. The legacy `intensity_min`/`intensity_max` aliases remain supported for compatibility.
- `related_to` accepts an optional `context` parameter to bias related-memory ranking toward the caller's current goal.
- Stats now include a `perProject` breakdown.
- Scoped `memory_stats` reports `totalConnections` only for edges where both endpoints are in scope.
- `list_memories` includes a `project: ...` segment in each line (`unset` when no project attribution exists).
- Validation contract: `project_id` must match `^[A-Za-z0-9][A-Za-z0-9._:-]{0,63}$` (max length 64).
- Invalid values return stable error code `NM_E020` with recovery guidance.
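The validation contract can be checked client-side with the documented pattern directly (the helper name is illustrative; the server itself returns `NM_E020` on failure):

```typescript
// Documented contract: ^[A-Za-z0-9][A-Za-z0-9._:-]{0,63}$ (max length 64).
const PROJECT_ID_RE = /^[A-Za-z0-9][A-Za-z0-9._:-]{0,63}$/;

// First character must be alphanumeric; up to 63 more chars from the
// extended set, for a 64-character maximum overall.
function isValidProjectId(id: string): boolean {
  return PROJECT_ID_RE.test(id);
}
```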
Import Diagnostics and Migration Semantics
`storage_diagnostics` reports the resolved snapshot path, the WAL path, and which configuration source won the persistence-path precedence check.
import_memories supports two source modes:
- Inline `entries` for ordinary bulk seeding.
- `file_path` for server snapshot imports, avoiding large MCP payloads.
Import validation flags:
- `dry_run: true` validates the request without writing data and returns deterministic `would_import`, `would_skip`, and `would_fail` counts.
- `dedupe` accepts `none`, `content_hash`, or `content_plus_tags`.
- Deduplicated rows are reported with stable reason codes: `DEDUPE_CONTENT_HASH` or `DEDUPE_CONTENT_PLUS_TAGS`.
- Snapshot `file_path` imports accept `.json` files under the resolved persistence directory by default. Set `NEURODIVERGENT_MEMORY_IMPORT_ALLOW_EXTERNAL_FILE=true` only when importing external snapshot files intentionally.
Snapshot migration flags:
- `preserve_ids` is only valid with `file_path`; any ID collision with the live store is rejected deterministically.
- `merge_connections` is only valid with `file_path`; every referenced connection target must exist either in the imported snapshot or the live store, or the row fails validation with `INVALID_CONNECTION_TARGET`.
- If validation failures are present, the non-dry-run import is rejected as a whole. Run `dry_run: true` first to inspect the failure list before retrying.
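Putting the flags together, a dry-run snapshot-migration request might look like the following (the file name is an invented example; the snapshot must live under the resolved persistence directory by default):

```json
{
  "name": "import_memories",
  "arguments": {
    "file_path": "memories-backup.json",
    "dry_run": true,
    "dedupe": "content_hash",
    "preserve_ids": true,
    "merge_connections": true
  }
}
```

Because `dry_run` is set, this returns the deterministic `would_import`/`would_skip`/`would_fail` counts without writing anything.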
Knowledge Graph Persistence
Memories are persisted with a write-ahead journal (WAL) plus snapshot model:
- Every mutating operation appends to `memories.json.wal.jsonl` first.
- The in-memory graph is then updated and periodically snapshotted to `memories.json`.
- On startup, the server loads `memories.json`, replays WAL entries, compacts to a fresh snapshot, then truncates the WAL.
This improves crash recovery behavior compared to snapshot-only persistence.
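The startup sequence amounts to "load the snapshot, then re-apply journal entries in append order". A minimal sketch of that replay step (the WAL entry shape and names here are assumptions for illustration, not the server's actual format):

```typescript
interface Memory {
  id: string;
  content: string;
}

// Illustrative WAL entry shape: one mutation per JSONL line.
type WalEntry =
  | { op: "store"; memory: Memory }
  | { op: "delete"; id: string };

// Rebuild in-memory state: snapshot first, then WAL entries in order.
function replay(snapshot: Memory[], wal: WalEntry[]): Map<string, Memory> {
  const graph = new Map<string, Memory>();
  for (const m of snapshot) graph.set(m.id, m);
  for (const entry of wal) {
    if (entry.op === "store") graph.set(entry.memory.id, entry.memory);
    else graph.delete(entry.id);
  }
  // The caller would now compact this state to a fresh snapshot
  // and truncate the WAL, per the startup sequence above.
  return graph;
}
```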
For explicit control, set one of these environment variables:
- `NEURODIVERGENT_MEMORY_DIR` to choose the directory that contains `memories.json`
- `NEURODIVERGENT_MEMORY_FILE` to point at a specific snapshot file
- `NEURODIVERGENT_MEMORY_MAX` to cap total memories (integer; default unlimited)
- `NEURODIVERGENT_MEMORY_EVICTION` to choose the eviction policy when the max is reached: `lru` (default), `access_frequency`, or `district_priority`
Mounts at `/home/node/.neurodivergent-memory` continue to work without any env override — that is the container's `node` user home and is checked automatically.
⚠️ Breaking change (v0.2.0): The image runs as the `node` user and cannot read `/root`, so previous mounts at `/root/.neurodivergent-memory` are silently skipped. Agents may appear to have lost all memories. See "Recovering memories after upgrade" below.
Recovering memories after upgrade
If you previously mounted data at `/root/.neurodivergent-memory`, your snapshot is still intact on the host volume. Re-mount it using one of these options:
Option A — explicit /data mount (recommended):
"-e", "NEURODIVERGENT_MEMORY_DIR=/data",
"-v", "mydata:/data"
Option B — mount at the path the node user already owns:
"-v", "mydata:/home/node/.neurodivergent-memory"
No NEURODIVERGENT_MEMORY_DIR override is needed for option B — the server finds the existing snapshot automatically.
For agents: if memories appear missing after upgrading the container, use import_memories to reload from a backup export, or ask your AI assistant to re-run memory_stats after the volume is remounted correctly to confirm restoration.
Multi-Tier Memory Persistence
The server supports a three-tier memory architecture for agents that work across multiple projects. Each tier
lives in its own directory and can be synced independently.
| Tier | Purpose | Typical path | Env var |
|---|---|---|---|
| project | Repo-scoped memories — ephemeral, CI-friendly | `.github/agent-kit/memories` | `NEURODIVERGENT_MEMORY_PROJECT_DIR` |
| user | Cross-project personal knowledge — durable, per-developer | `~/.neurodivergent-memory` | `NEURODIVERGENT_MEMORY_USER_DIR` |
| org | Shared organisational knowledge — optional, team-wide | any shared mount | `NEURODIVERGENT_MEMORY_ORG_DIR` |
The primary server still reads its active snapshot from NEURODIVERGENT_MEMORY_DIR (or the auto-discovered
default). Tier variables are used exclusively by the sync-memories helper.
Tagging memories for sync
Add a persistence:durable tag to any memory that should be promoted to the user or org tier. Memories
without this tag are treated as ephemeral and stay in the project tier.
["topic:typescript", "scope:global", "kind:pattern", "layer:architecture", "persistence:durable"]
Use persistence:ephemeral as an explicit opt-out for memories you never want promoted.
Syncing memories between tiers
After a build, milestone, or session — promote durable memories from the project tier to the user tier:
NEURODIVERGENT_MEMORY_PROJECT_DIR=.github/agent-kit/memories \
NEURODIVERGENT_MEMORY_USER_DIR=~/.neurodivergent-memory \
npm run sync-memories -- --from project --to user
Or use explicit paths:
node build/scripts/sync-memories.js \
--from .github/agent-kit/memories \
--to ~/.neurodivergent-memory
Full option reference:
--from <path|tier> Source snapshot directory, or tier name: project | user | org
--to <path|tier> Target snapshot directory, or tier name: project | user | org
--tags <tag1,tag2,...> Promote only memories matching ALL listed tags (default: persistence:durable)
--any-tag Match memories that have ANY of the listed tags (OR logic)
--dry-run Report counts without writing any data
Safety note: stop the MCP server for the target tier before running sync — the script writes directly to
the snapshot file and will warn if it detects an open WAL for the target directory.
Release Security
- GitHub Actions runs on Node.js 24 LTS for CI and release automation
- npm publishes use OIDC provenance with `npm publish --provenance --access public`
- Docker images are built with Buildx, published to Docker Hub, and emitted with SBOM and provenance metadata
- GitHub Actions generates artifact attestations for the npm tarball and the pushed container image digest
- Tagged releases upload the npm tarball, checksums, and attestation bundles as release assets
Development RC Channel
Pushes to the development branch publish release candidates using the same npm package name (neurodivergent-memory) and container repositories.
- npm prereleases are published as `0.x.x-rc.N` with npm dist-tag `rc`.
- npm prerelease suffix `N` uses `run_number.run_attempt` to avoid collisions on workflow re-runs.
- Docker images are published with immutable `rc-0.x.x-rc.N` tags only, where `N` is derived from `run_number.run_attempt`.
- GitHub releases for RC builds are marked as pre-release.
These builds are intentionally less stable than the research preview line and should be used only for validation and early integration testing.
Live Readiness Smoke (project_id)
Use the deterministic live smoke harness to validate project_id attribution/scoped retrieval end-to-end:
- Local build target: `npm run smoke:project-id`
- Latest Docker RC target (PowerShell):
$rc = (Invoke-RestMethod -Uri "https://hub.docker.com/v2/repositories/twgbellok/neurodivergent-memory/tags?page_size=25").results |
Where-Object { $_.name -match '^rc-' } |
Sort-Object { $_.last_updated } -Descending |
Select-Object -First 1 -ExpandProperty name
node test/live-project-id-smoke.mjs "docker run --rm -i twgbellok/neurodivergent-memory:$rc"
The smoke harness exits non-zero on failed assertions and is suitable as a release-readiness gate.
Error Contract
Mutating and lookup tool failures are returned with a stable operator-facing shape embedded in the text response:
❌ <summary>
Code: NM_EXXX
Message: Human-readable failure summary
Recovery: Suggested next action
The leading summary line is contextual, while the Code/Message/Recovery block remains stable for operators to parse and search. This keeps MCP responses readable in chat clients while giving operators a stable code they can search in logs and release notes. Structured logs are written with Pino to stderr and include the same code field on known failure paths.
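Since the `Code`/`Message`/`Recovery` block is stable, operators can extract it mechanically. A parsing sketch (the sample text below is an invented example of the documented shape):

```typescript
interface ToolError {
  code: string;
  message: string;
  recovery: string;
}

// Pull the stable fields out of the documented error shape.
// The leading summary line is contextual, so it is ignored here.
function parseToolError(text: string): ToolError | null {
  const code = text.match(/^Code: (NM_E\d+)$/m);
  const message = text.match(/^Message: (.+)$/m);
  const recovery = text.match(/^Recovery: (.+)$/m);
  if (!code || !message || !recovery) return null;
  return { code: code[1], message: message[1], recovery: recovery[1] };
}
```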
Concurrency Safety
Mutating tools are serialized through an async mutex to prevent concurrent write races when multiple agents call the server at the same time.
Write queue behavior:
- Pending write operations are bounded by `NEURODIVERGENT_MEMORY_QUEUE_DEPTH` (default: `50`).
- When the queue is full, mutating tools return `NM_E010` with a retry-oriented recovery message.
- Queue high-water/clear transitions are logged with structured Pino warnings.
WIP guardrail behavior:
- `store_memory` checks practical in-progress task saturation per `agent_id` when task tags include in-progress markers.
- The cap is controlled by `NEURODIVERGENT_MEMORY_WIP_LIMIT` (default: `1`; set `0` to disable).
- Exceeding the cap emits a warning line in the tool response and logs `NM_E011` for operator visibility.
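A sketch of the WIP-cap check described above. The in-progress marker tag and function names are assumptions for illustration; the real server reads the limit from `NEURODIVERGENT_MEMORY_WIP_LIMIT`:

```typescript
interface TaskMemory {
  agentId?: string;
  tags: string[];
}

// Hypothetical in-progress marker tag for the example.
const IN_PROGRESS_TAG = "status:in-progress";

// True when this agent already has wipLimit (or more) open tasks,
// so storing another would exceed the cap. A limit of 0 disables the check.
function exceedsWipLimit(
  existing: TaskMemory[],
  agentId: string,
  wipLimit: number
): boolean {
  if (wipLimit === 0) return false;
  const open = existing.filter(
    (m) => m.agentId === agentId && m.tags.includes(IN_PROGRESS_TAG)
  ).length;
  return open >= wipLimit;
}
```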
Loop Telemetry And Guardrails
The server tracks loop signals and can surface targeted guardrail responses:
- Repetition detection on `store_memory` compares incoming content against the 10 most recent memories (same `agent_id` when provided) using tokenizer-consistent token-overlap scoring with an exact-match fast path.
- Stores that meet the repeat threshold set `repeat_detected: true`, increment `repeat_write_count` on the matched memory, and add a "No net-new info" warning to the tool response.
- Repeated `logical_analysis` reads of `emotional_processing` memories add a `distill_memory` suggestion once the configured threshold is crossed.
- Read/write ping-pong transitions are tracked in a rolling operation window, increment `ping_pong_counter` when threshold conditions are met, and can optionally start a temporary cross-district write cooldown.
- `memory_stats` now includes a `loop_telemetry` block with:
  - `repeat_write_candidates` (top 5)
  - `ping_pong_candidates` (top 5)
  - `recent_high_similarity_writes` (last 5)
Configuration:
- `NEURODIVERGENT_MEMORY_REPEAT_THRESHOLD` (default: `0.85`)
- `NEURODIVERGENT_MEMORY_LOOP_WINDOW` (default: `20`)
- `NEURODIVERGENT_MEMORY_PING_PONG_THRESHOLD` (default: `3`)
- `NEURODIVERGENT_MEMORY_DISTILL_SUGGEST_THRESHOLD` (default: `3`)
- `NEURODIVERGENT_MEMORY_CROSS_DISTRICT_COOLDOWN_MS` (default: `0`, disabled)
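A sketch of token-overlap repetition scoring against the default `0.85` threshold. The exact-match fast path matches the description above; the overlap formula itself is an assumption, shown here as Jaccard similarity over token sets:

```typescript
function tokens(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

// Exact-match fast path, then Jaccard overlap of the two token sets.
function repetitionScore(incoming: string, recent: string): number {
  if (incoming === recent) return 1; // fast path
  const a = tokens(incoming);
  const b = tokens(recent);
  const shared = Array.from(a).filter((t) => b.has(t)).length;
  const union = new Set(Array.from(a).concat(Array.from(b))).size;
  return union === 0 ? 0 : shared / union;
}

// Default for NEURODIVERGENT_MEMORY_REPEAT_THRESHOLD.
const REPEAT_THRESHOLD = 0.85;

function isRepeat(incoming: string, recent: string): boolean {
  return repetitionScore(incoming, recent) >= REPEAT_THRESHOLD;
}
```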
Performance Benchmark Baseline
Issue #19 adds a deterministic benchmark harness for end-to-end MCP stdio measurements against the built server.
Run it with:
npm run benchmark
The benchmark:
- Uses an isolated temp persistence directory so it does not mutate your local memory graph.
- Seeds each dataset tier, then measures
store_memorythroughput across 100 writes at the target tier. - Measures
search_memoriesandlist_memorieslatency over 100 iterations at 1k, 5k, and 10k memories. - Measures
traverse_fromlatency at depths 2, 3, and 5 on a connected graph of 500 memories. - Prints the structured JSON report to stdout for automation-friendly capture.
- Writes run-specific outputs to timestamped files under
benchmark-results/. - Also writes rolling latest aliases:
benchmark-results/memory-benchmark-latest.jsonbenchmark-results/memory-benchmark-latest.md
There is also a convenience alias:
npm run bench
The committed baseline is intended as a relative regression reference for RC vs stable comparisons, not as a universal absolute performance guarantee across machines.
To intentionally refresh the committed baseline files in place:
npm run benchmark -- --update-baseline
Development
Install dependencies:
npm install
Build the server:
npm run build
For development with auto-rebuild:
npm run watch
Installation
To use with Claude Desktop, add the server config:
On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json
For npm:
{
"mcpServers": {
"neurodivergent-memory": {
"command": "npx",
"args": ["neurodivergent-memory"]
}
}
}
For Docker:
{
"mcpServers": {
"neurodivergent-memory": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"NEURODIVERGENT_MEMORY_DIR=/data",
"-v",
"neurodivergent-memory-data:/data",
"docker.io/twgbellok/neurodivergent-memory:0.3.0"
]
}
}
}
Fully auto-approved tools:
{
"mcpServers": {
"neurodivergent-memory": {
"autoApprove": [
"store_memory",
"retrieve_memory",
"connect_memories",
"search_memories",
"update_memory",
"delete_memory",
"traverse_from",
"related_to",
"list_memories",
"memory_stats",
"storage_diagnostics",
"import_memories",
"distill_memory",
"prepare_memory_city_context",
"prepare_synthesis_context",
"prepare_packetized_synthesis_context",
"register_district"
],
"disabled": false,
"timeout": 120,
"type": "stdio",
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"NEURODIVERGENT_MEMORY_DIR=/data",
"-v",
"neurodivergent-memory-data:/data",
"docker.io/twgbellok/neurodivergent-memory:0.3.0"
],
"env": {}
}
}
}
To use the MCP server in GitHub Copilot Agent Workflows (GitHub spins up a fresh VM for every workflow run, so cross-workflow memory does not persist; session memory works but is wiped when the job completes):
{
"mcpServers": {
"neurodivergent-memory": {
"type": "stdio",
"command": "npx",
"args": [
"[email protected]"
],
"env": {
"NEURODIVERGENT_MEMORY_DIR": ".neurodivergent-memory"
},
"tools": [
"retrieve_memory",
"connect_memories",
"update_memory",
"delete_memory",
"traverse_from",
"related_to",
"import_memories",
"storage_diagnostics",
"distill_memory",
"prepare_memory_city_context",
"prepare_synthesis_context",
"prepare_packetized_synthesis_context",
"register_district",
"list_memories",
"store_memory",
"search_memories",
"memory_stats"
]
}
}
}
If you want per-project isolation instead of a shared global memory file, mount a project-specific host directory and keep the same container-side target. Use the path separator for your OS:
- Windows: `${workspaceFolder}\.neurodivergent-memory:/data`
- macOS / Linux: `${workspaceFolder}/.neurodivergent-memory:/data`
{
"mcpServers": {
"neurodivergent-memory": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"NEURODIVERGENT_MEMORY_DIR=/data",
"-v",
"${workspaceFolder}/.neurodivergent-memory:/data",
"docker.io/twgbellok/neurodivergent-memory:0.3.0"
]
}
}
}
Note: Replace `/` with `\` on Windows: `${workspaceFolder}\.neurodivergent-memory:/data`
Docker Runtime
Use an explicit version tag. The published Docker images intentionally do not maintain a floating `latest` tag.
You can also run the packaged server image directly:
docker run --rm -i twgbellok/neurodivergent-memory:0.3.0
Debugging
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
npm run inspector
The Inspector will provide a URL to access debugging tools in your browser.
Agent Workflow Setup
This repository ships a reusable agent customization kit whose authoring source lives at .github/agent-kit/.
Use the packaged installer to materialize those templates into a consumer repository's .github/... folders instead of tracking a live generated agent file in this repo.
Contents
| File | Purpose |
|---|---|
| `templates/neurodivergent-agent.agent.md` | Full-featured Memory-Driven Development Coordinator agent. Five-phase workflow: pull context → research → improve memories → plan → act & hand off. |
| `templates/memory-driven-template.agent.md` | Minimal generic agent template — a lighter starting point if you want to build your own workflow on top. |
| `templates/nd-memory-workflow.instructions.md` | Shared instruction file that reinforces memory-driven habits in day-to-day coding sessions without requiring explicit agent invocation. |
| `templates/setup-nd-memory.prompt.md` | Guided setup prompt that asks the user to choose an install policy before anything is installed. |
| `templates/copilot-instructions.md` | Bootstrap reference for GitHub Copilot sessions — tag schema, district table, tool quick-reference, and session checklist in one file. |
| `templates/explore_memory_city.prompt.md` | Prompt for guided exploration of memory districts and graph structure. |
| `templates/memory-driven-issue-execution.prompt.md` | Prompt for executing a tracked issue with full memory-driven context (pull → plan → act → update). |
Install the kit into a project
Install the current packaged kit into the repo you are standing in:
npx neurodivergent-memory@latest init-agent-kit
Useful options:
- `--target <path>` installs into a different repository root.
- `--dry-run` shows what would be copied without writing files.
- `--force` overwrites existing destination files.
- `--mode prompt-first|auto-setup` records the intended install policy in command output while leaving template wording unchanged.
The installer copies templates into standard customization locations such as .github/agents/, .github/instructions/, .github/prompts/, and .github/copilot-instructions.md.
Authoring source and generated files
The source of truth remains under .github/agent-kit/templates/. The installed live agent file .github/agents/neurodivergent-agent.agent.md is intentionally treated as generated consumer state rather than a tracked repo artifact, so remote Copilot updates cannot keep wiping it out in this repository.
Manual copy fallback
Copy the files you need into your project's standard customization locations — do not move them, so the originals remain available as a reference for future agents or contributors.
The right target directories vary by agent platform. Use whatever location your agent natively reads from. Common examples:
- `.github/agents/` for agent definitions
- `.github/instructions/` for shared instructions
- `.github/prompts/` for prompts
- `.github/` root for `copilot-instructions.md`
Install policy handshake
Before installing neurodivergent-memory MCP in any project, ask the user which policy to apply:
- `prompt-first` (default) — Ask for explicit approval before installing.
- `auto-setup` — Install automatically without prompting.
Update the imported agent file's installation section to reflect the chosen policy. If no preference is stated, default to prompt-first.
Appendix
Here is an example `copilot-instructions.md`:
# neurodivergent-memory — Agent Bootstrap Instructions
This file is automatically read by GitHub Copilot and compatible agents at the start of every session.
It replaces the need to fetch the governance memory (`memory_11`) before working with this MCP server.
---
## What this server is
`neurodivergent-memory` is a **Model Context Protocol (MCP) server** that stores and retrieves memories as a
knowledge graph. It is designed for neurodivergent thinking patterns: non-linear, associative, tag-rich.
Memories are organised into five **districts** (knowledge domains) and connected via bidirectional edges.
Search uses **BM25 semantic ranking** — no embedding model or cloud LLM required.
---
## Canonical Tag Schema
Always apply tags from the five namespaces below when calling `store_memory`.
Multiple tags from different namespaces are expected on every memory.
When storing execution-heavy memories, include the reasoning behind the action and, when possible, connect the entry to a durable principle in `logical_analysis` or `creative_synthesis` so retrieval preserves understanding and not just activity.
| Namespace | Purpose | Examples |
|---|---|---|
| `topic:X` | Subject matter / domain | `topic:unity-ecs`, `topic:adhd-strategies`, `topic:rust-async` |
| `scope:X` | Breadth of the memory | `scope:concept`, `scope:project`, `scope:session`, `scope:global` |
| `kind:X` | Type of knowledge | `kind:insight`, `kind:decision`, `kind:pattern`, `kind:reference`, `kind:task` |
| `layer:X` | Abstraction level | `layer:architecture`, `layer:implementation`, `layer:debugging`, `layer:research` |
| `persistence:X` | Sync-tier eligibility | `persistence:durable`, `persistence:ephemeral` |
**Example tag set for a Unity ECS memory:**
```json
["topic:unity-ecs", "topic:dots", "scope:project", "kind:pattern", "layer:architecture"]
```

**Example tag set for a durable cross-project memory:**

```json
["topic:typescript", "scope:global", "kind:pattern", "layer:architecture", "persistence:durable"]
```
Districts
| Key | Purpose |
|---|---|
| `logical_analysis` | Structured thinking, analysis, research findings |
| `emotional_processing` | Feelings, emotional states, affective responses |
| `practical_execution` | Tasks, plans, implementations, action items |
| `vigilant_monitoring` | Risks, warnings, constraints, safety concerns |
| `creative_synthesis` | Novel connections, creative ideas, cross-domain insights |
Available MCP Tools (quick reference)
| Tool | Purpose |
|---|---|
| `store_memory` | Create a new memory node |
| `retrieve_memory` | Fetch one memory by ID |
| `update_memory` | Modify content, tags, district, valence, or intensity |
| `delete_memory` | Remove a memory and all its connections |
| `connect_memories` | Add an edge between two memory nodes |
| `search_memories` | BM25-ranked search with optional context, recency_weight, min_score, district, tag, valence, and intensity filters |
| `traverse_from` | BFS graph walk from a node up to N hops |
| `related_to` | Hop-proximity + BM25 blend for a given memory ID, with optional goal-context boost |
| `list_memories` | Paginated enumeration of all stored memories |
| `memory_stats` | Totals, per-district/per-project counts, most-accessed, and orphans |
| `storage_diagnostics` | Resolved snapshot path, WAL path, and effective persistence source |
| `import_memories` | Bulk import from inline entries or a snapshot file with dry-run and migration controls |
| `distill_memory` | Translate an emotional_processing memory into a structured logical artifact |
| `prepare_memory_city_context` | Tool mirror of explore_memory_city for prompt-limited clients |
| `prepare_synthesis_context` | Tool mirror of synthesize_memories for prompt-limited clients |
| `prepare_packetized_synthesis_context` | Tool mirror of synthesize_memory_packets for attachment-constrained clients |
| `register_district` | Register a custom district with LUCA ancestry validation |
Persistence
Memories are automatically saved to `~/.neurodivergent-memory/memories.json` on every write.
The graph is restored on server startup — no data is lost between restarts.
Memory Quality Guardrails
- Do not stop at "what happened". Important memories should capture why the action was taken, what tradeoff or principle drove it, and whether the insight is reusable.
- Treat `practical_execution` as the action log, then pair it with `logical_analysis` or `creative_synthesis` when the deeper rationale should survive longer than the implementation details.
- When a debug trail, handoff, or emotional/raw memory is noisy, use `distill_memory` or an explicit follow-up memory to preserve the signal while stripping incidental detail.
- Prefer connective synthesis over isolated task logs: link implementation memories back to durable principles such as explicit state over implicit state, bounded growth, or environment-aware validation.
Bootstrap checklist for new agent sessions
- Call `memory_stats` to see how many memories exist.
- Use `search_memories` with a broad query to locate relevant prior context.
- Check whether recent memories already explain the rationale or durable principle behind the task, not just the last execution step.
- Apply the canonical tag schema when calling `store_memory`.
- Connect new memories to related existing ones with `connect_memories`.
- Use `traverse_from` or `related_to` for associative retrieval rather than repeated searches.
- No Quick Task exemption: any file edit, decision, or finding in this repo is memory-worthy — write the memory before moving on. If you catch yourself thinking "this is too small" — that is the trigger, not a bypass.
- No execution-only memory exemption: if a memory says what changed, it should also say why it changed or link to a memory that does.