spark-protocol

Security Audit

Overall: Warn
Health: Warn
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 7 GitHub stars
Code: Pass
  • Code scan — Scanned 1 file during light audit, no dangerous patterns found
Permissions: Pass
  • Permissions — No dangerous permissions requested
Purpose: This tool provides six custom skills (commands) designed for Claude Code, focused on AI-assisted divergent thinking, brainstorming, and cross-referencing scattered project notes to find hidden connections.

Security Assessment: Overall risk: Low. The code scan checked the shell scripts and found no dangerous patterns, hardcoded secrets, or requests for elevated permissions. Because it is designed to cross-reference "your full context," it will naturally read whatever local files and notes you point it to. However, it does not appear to maliciously exfiltrate that data or execute hidden shell commands. Standard caution applies whenever a tool accesses your local file system.

Quality Assessment: The project is brand new and actively maintained, with its last push occurring today. It uses a standard permissive MIT license, making it safe for commercial and private use. The main drawback is extremely low community visibility. With only 7 GitHub stars, the tool has not been widely tested or peer-reviewed by the broader developer community.

Verdict: Use with caution—the code itself appears safe, but the lack of widespread community validation means you should review its file access points before running it on sensitive directories.
SUMMARY

Your AI converges by default. This makes it diverge. Six Claude Code skills for AI-assisted divergent thinking, built on neuroscience.

README.md

Spark Protocol

Your AI converges by default. This makes it diverge.

Every AI tool is built to get from A to B. Spark Protocol finds connections you weren't looking for — cross-referencing your full context overnight and surfacing blind spots before you know to ask.


The problem

Your context is everywhere — issue trackers, chat, meeting notes, memory files, that tab you've had open for a week. Your AI sees what you show it. It doesn't cross-reference all of it.

Spark Protocol does. It runs cross-referencing passes across your full context and finds structural connections nobody is watching for.

A real example: My overnight daemon connected a product initiative to a pattern from a previous role — at midnight, autonomously. Nine hours later, a senior leader asked about the same domain. The context was pre-loaded. One spark session turned 8 weeks of accumulated insights into a framework proposal, delivered to leadership the same day.

Who is this for?

Leaders: Organizational blind spot detection. Your teams build in silos — this cross-references what engineering is doing against what customers are saying against how the market is shifting. Overnight, in the background, results at your morning review.

Builders: A cognitive architecture for AI-assisted divergent thinking. Six Claude Code skills, anti-convergence mechanics, chain capture, trust calibration. Built on neuroscience. Open source.


What it produces (honest numbers)

17 days of tracked use: 72 ideas captured, 24 shipped (33%), 3 changed how the team works. 15 are dormant. 3 have died (waste disposal mechanism validated — expired signals and unconsumed connections get killed).

The survival log tracks every spark — its trust level, provenance, and whether it shipped, stalled, or died.


Six skills

  • /ideate — Activate divergent mode. Loads fuel, sets guardrails, runs session primer. The entry point.
  • /spark — Ping-pong divergent thinking with anti-convergence mechanics. The core engine.
  • /dream-spark — Cross-referencing across your full context + org signals. The blind spot detector.
  • /daydream — Background micro-spark. Whisper a hunch; a sub-agent chases it without interrupting you.
  • /delve — Process FOMO captures into resonance, patterns, and connections. The absorption layer.
  • /fomo — Zero-friction capture. URL, thought, half-idea — one command, done.

Quick start

git clone https://github.com/get-notch/spark-protocol.git
cd spark-protocol
./install.sh   # 30 seconds — asks for paths, sets up directories

Then:

/ideate

This loads your fuel (dream sparks, FOMO links, recent daylogs), sets anti-convergence guardrails, and prints a session primer. When you're ready to dive in:

/spark "something about onboarding feels like a flocking problem"

Or skip the warmup and go straight in:

/ideate "something about onboarding feels like a flocking problem"

This activates divergent mode and immediately starts a spark session with your anchor.


How it works

The session

Start with /ideate. It loads your accumulated context — dream cross-references, FOMO captures, recent sparks — and activates divergent guardrails for the session. Everything after this point runs under anti-convergence rules: no premature summarizing, no productivity guilt, follow every tangent.

Within an ideate session, you use the other skills freely:

  • /spark <anchor> — full divergent thinking session with chain capture
  • /daydream <whisper> — background micro-spark while you keep talking
  • /dream-spark — cross-referencing pass across your full knowledge base
  • /delve — process a captured FOMO link into resonance and connections
  • /fomo <url or thought> — capture something for later without breaking flow

All outputs accumulate. Routing happens at session close — you decide what becomes an idea file, a ticket, a memory update, or just a logged spark.

The capture → absorb pipeline

/fomo and /delve work as a pair. /fomo captures — zero friction, one command, the link or thought is parked in your FOMO log. No processing, no reading, no judgment. You saw something that pinged your salience filter and you captured it before it disappeared.

/delve completes the thought. It goes INTO the captured content, reads it deeply, extracts the core pattern, cross-references against your memory pool, and produces a verdict: absorb (enrich your context), park (interesting but not urgent), or release (nothing you're missing — FOMO resolved).

The gap between capture and absorption is where knowledge rots. A FOMO link captured but never processed is like a slide flagged "suspicious" but never examined by the pathologist. /delve is the examination.

/fomo <url>              → captured (1 second, zero processing)
    ↓ hours/days later
/delve                   → processed (reads content, extracts resonance, verdicts)
    ↓ output
dream-sparks.md          → fuel for /spark and /dream-spark sessions
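The capture half of this pipeline can be sketched in a few lines of shell. This is a hypothetical model of /fomo's effect, not the tool's actual implementation — the log file name (fomo-log.md) and the line format are assumptions; the real skill runs inside Claude Code.

```shell
#!/bin/sh
# Sketch of zero-friction capture: one append per capture, no reading,
# no processing. File name and tag format are illustrative assumptions.
FOMO_LOG="${FOMO_LOG:-fomo-log.md}"

fomo() {
  # Park the URL or half-idea before it disappears; /delve examines it later.
  printf -- '- UNPROCESSED %s\n' "$*" >> "$FOMO_LOG"
}

fomo "https://example.com/flocking-models"
fomo "half-idea: onboarding as a flocking problem"
```

The point of the sketch is the asymmetry: capture costs one append, while absorption (/delve) does all the reading and judgment later.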

Inside a spark session

Ignition — provide an anchor. The AI sweeps your context for anything that resonates.

Chain Reaction — ping-pong. You throw a fragment, the AI adds a hop. Neither converges. Anti-convergence mechanics prevent premature summarization.

Cool-down — trace the chain. Rate the session. Assign trust levels. Route outputs.

Anti-convergence

LLMs want to summarize. This fights it:

  • Semantic narrowing → force cross-domain jump
  • Summary impulse → suppress, make one more hop
  • Clarifying question → replace with assertion
  • Energy drop → inject random context

Organizational subconscious

/dream-spark queries your org tools (issue tracker, chat, meeting notes, calendar) and cross-references against your memory pool. Same topic in two unrelated places = connection worth investigating. Runs on schedule or on demand. Details →
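For the scheduled mode, a plain cron entry is one way to run the overnight pass. The runner script name and log path below are assumptions for illustration — the tool's real scheduling hook may differ.

```shell
# Hypothetical crontab entry: run the cross-referencing pass at midnight
# and append output to a log. Script path is illustrative, not the tool's.
0 0 * * * "$HOME/spark-protocol/dream-spark-run.sh" >> "$HOME/.spark/dream.log" 2>&1
```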

Trust calibration

Every output gets a trust level: SEED (raw idea) → REVIEWED (validated by org signals) → VERIFIED (produced an artifact) → SOT (changed how the team works). Trust determines routing — prevents both shipping noise and burying insight. Details →
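The routing step can be pictured as a simple lookup from trust level to destination. The four level names come from the README; the destinations below are illustrative guesses, not the tool's actual routing table.

```shell
#!/bin/sh
# Sketch of trust-based routing. Levels are from the protocol; the
# destinations are hypothetical examples of where each level might go.
route_spark() {
  case "$1" in
    SEED)     echo "survival log only" ;;        # raw ideas never ship as-is
    REVIEWED) echo "idea file" ;;                # validated by org signals
    VERIFIED) echo "ticket / artifact link" ;;   # produced something concrete
    SOT)      echo "team playbook update" ;;     # changed how the team works
    *)        echo "unknown trust level: $1" >&2; return 1 ;;
  esac
}

route_spark SEED
route_spark SOT
```

The asymmetry is the point: low-trust sparks stay in the log (no shipped noise), and only SOT-level sparks touch shared documents (no buried insight).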

Memory management

Creative systems grow until they break. The dream-sparks.md file hit 1,214 lines and 413KB after 13 days — past the Read tool's limit. The fix: a two-tier architecture inspired by MemPalace. Active sparks (recent, alive, unresolved) stay in the hot file. Archived sparks (released, expired, old swarm bursts) move to cold storage — still searchable, never loaded. Raw beats compressed at our scale. Details →
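The hot/cold split can be sketched as a filter over the spark file. The status tags ([active], [released], [expired]) and archive file name below are assumptions about the format, used only to make the mechanism concrete.

```shell
#!/bin/sh
# Sketch of two-tier memory: resolved sparks move to cold storage
# (still searchable, never loaded); live sparks stay in the hot file.
# Tags and file names are illustrative assumptions.
HOT="dream-sparks.md"
COLD="dream-sparks-archive.md"

printf '%s\n' \
  '- [active] onboarding as a flocking problem' \
  '- [released] old swarm burst' \
  '- [expired] stale market signal' > "$HOT"

# Append resolved sparks to the archive, then rewrite the hot file
# with only the live ones.
grep -E '\[(released|expired)\]' "$HOT" >> "$COLD"
grep -Ev '\[(released|expired)\]' "$HOT" > "$HOT.tmp" && mv "$HOT.tmp" "$HOT"
```

After the pass, the hot file holds only active sparks, keeping it under the Read tool's size limit while nothing is deleted outright.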

Cross-session snowball

Session 1 → sparks saved → overnight cross-referencing
Session 2 → richer fuel → better sparks → saved
Session N → connections self-sustain

The deeper architecture

I have ADHD. The chain-reaction pattern — remote associations firing across unrelated domains and fading in seconds — is both the creative engine and the thing I needed to capture. I studied the neuroscience (creativity = frequency of switches between wandering and focus networks, not dominance of either), then built a protocol to sustain that pattern in AI.

The anti-convergence mechanics fight the model's summarization instinct. The trust calibration manages the noise. The survival tracking keeps me honest about what ships vs what just felt like progress.

Caveats: The ADHD-creativity link is more nuanced than headlines suggest. The protocol works for anyone — my ADHD inspired the design, it's not a prerequisite.

Architecture & neuroscience → · Cognitive pipeline → · Claude Code internals →


Validation

Honest numbers → · Original chain capture → · Evidence → · V&V checklist →


License

MIT
