# Riverse — River Algorithm

A personal AI agent that runs on your own machine — persistent memory, offline cognition, grows with every conversation. All data stays local.
📖 Full documentation: wangjiake.github.io/riverse-docs
## RAG Memory vs. River Algorithm

Why similarity search isn't enough for a personal AI.
| | RAG / Existing AI Memory | Riverse / River Algorithm |
|---|---|---|
| Retrieval | Keyword / vector similarity — finds text that "looks similar" | Personal profile weighting — ranked by relevance to you |
| Timeline | No timeline — 3 years ago and yesterday weigh the same | Temporal evolution — recent events carry momentum, old patterns solidify or fade |
| Reasoning | No causal inference — "headaches" and "job change" stay as isolated fragments | Causal chains — connects "new job → stress → sleep problems → headaches" |
| Confidence | All facts have equal weight | Observations progress: suspected → confirmed → established |
| Data ownership | Cloud-dependent, platform-owned | Local-first — your device, your data |
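The suspected → confirmed → established progression can be sketched as a small state machine. This is an illustrative model only (the `Observation` class, thresholds, and method names are assumptions, not Riverse's actual implementation): an observation is promoted each time a later conversation independently supports it, and demoted when contradicted.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A hypothetical memory item whose confidence grows with evidence."""
    text: str
    support: int = 1  # number of independent supporting mentions

    @property
    def level(self) -> str:
        # Thresholds are illustrative, not Riverse's actual tuning.
        if self.support >= 4:
            return "established"
        if self.support >= 2:
            return "confirmed"
        return "suspected"

    def reinforce(self) -> None:
        """A later conversation supports this observation."""
        self.support += 1

    def contradict(self) -> None:
        """A later conversation contradicts it; confidence drops."""
        self.support = max(0, self.support - 2)

obs = Observation("sleep worsened after the job change")
print(obs.level)  # suspected
obs.reinforce()
print(obs.level)  # confirmed
obs.reinforce(); obs.reinforce()
print(obs.level)  # established
```

The key property is asymmetry: a contradiction costs more support than a single confirmation adds, so stale beliefs are washed away faster than they accumulated.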
### 💬 Scenario: You said "I've been having headaches lately"

Two months ago, you mentioned changing jobs.
| RAG Approach | River Algorithm |
|---|---|
| 🧑 I've been having headaches lately<br>🤖 I'm sorry to hear that. Here are some common causes of headaches: dehydration, screen time, stress... Would you like some tips?<br><br>*Vector search found "headache" in a medical Q&A corpus. No connection to your life context.* | 🧑 I've been having headaches lately<br>🤖 You started the new management role about two months ago, and around the same time your sleep got worse — you mentioned staying up past 1am several times. The headaches might be linked. How's the work pressure been? Want me to track this as a health pattern?<br><br>*Causal chain: job change → sleep disruption → headaches. Timeline-aware, personally weighted.* |
## What is Riverse?
You've been talking to AI for years, but no AI actually knows you. Switch platforms and you start from zero. Your data is scattered across clouds you don't control.
Riverse is a personal AI agent that runs on your own machine. It remembers every conversation and consolidates memories offline, like human sleep — extracting your personality, preferences, experiences, and relationships into a continuously growing profile. The more you talk, the deeper it understands you. All data stays local and belongs to you.
## River Algorithm
Conversations flow like water, key information settles like riverbed sediment, progressively upgrading from "suspected" to "confirmed" to "established" through multi-turn verification. Offline consolidation (Sleep) acts as the river's self-purification.
```
Conversation flows in ──→ Erosion ──→ Sedimentation ──→ Shapes cognition ──→ Keeps flowing
                             │              │                   │
                             │              │                   └─ Confirmed knowledge → stable bedrock
                             │              └─ Key info → observations, hypotheses, profiles
                             └─ Outdated beliefs washed away, replaced by new insights
```
- Flow — Every conversation is water flowing through. The river never stops; understanding of you evolves continuously
- Sediment — Key information settles like silt: facts sink into profiles, emotions into observations, patterns into hypotheses
- Purify — Sleep is the river's self-purification: washing away outdated info, resolving contradictions, integrating fragments
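The Sediment step routes each kind of extracted information into its own store. The sketch below is purely illustrative (the `settle` function, item keys, and store names are assumptions, not Riverse's real schema); it only shows the routing rule the bullets describe.

```python
def settle(item: dict, stores: dict) -> None:
    """Route an extracted item into the store matching its kind."""
    kind = item["kind"]
    if kind == "fact":
        stores["profiles"].append(item["text"])      # facts sink into profiles
    elif kind == "emotion":
        stores["observations"].append(item["text"])  # emotions into observations
    elif kind == "pattern":
        stores["hypotheses"].append(item["text"])    # patterns into hypotheses

stores = {"profiles": [], "observations": [], "hypotheses": []}
for item in [
    {"kind": "fact", "text": "started a management role in March"},
    {"kind": "emotion", "text": "anxious about the new team"},
    {"kind": "pattern", "text": "headaches follow late nights"},
]:
    settle(item, stores)

print(stores["hypotheses"])  # ['headaches follow late nights']
```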
## Features
- Persistent Memory — Remembers across sessions, builds a timeline-based profile that evolves with you
- Offline Consolidation (Sleep) — Extracts insights, resolves contradictions, strengthens confirmed knowledge
- Multi-Modal Input — Text, voice, images, files — all understood natively
- Pluggable Tools — Finance tracking, health sync (Withings), web search, vision, TTS, and more; toggle or remove any tool from the System page
- YAML Skills — Custom behaviors triggered by keyword or cron schedule; install from SkillHub or paste YAML directly in the dashboard
- Outsource / Task Agent — Delegate complex multi-step tasks to an autonomous sub-agent; preview the plan, confirm, then track real-time progress on the `/outsource` page
- Session Management — Rename and pin conversations to quickly find what matters
- External Agents — Connect Home Assistant, n8n, Dify and more via `agents_*.yaml`
- MCP Protocol — Model Context Protocol support for Gmail and other MCP servers
- Multi-Channel — Telegram, Discord, REST API, WebSocket, CLI, Web Dashboard
- Local-First — Ollama by default, auto-escalates to OpenAI / DeepSeek when needed
- Proactive Outreach — Follows up on events, checks in when idle, respects quiet hours
- Semantic Search — BGE-M3 embeddings, retrieves relevant memories by meaning
- Multi-language Prompts — English, Chinese, Japanese — switch with one setting
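A YAML skill might look something like this. The field names below are hypothetical, chosen only to illustrate the keyword/cron trigger idea; consult the documentation for the real schema before writing one.

```yaml
# Hypothetical skill file — field names are illustrative, not the documented schema.
name: morning-briefing
trigger:
  cron: "0 8 * * *"                       # every day at 08:00
  keywords: ["briefing", "morning summary"]
action:
  prompt: |
    Summarize yesterday's conversations and any open follow-ups,
    then list today's scheduled check-ins.
```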
On accuracy: No LLM today is specifically trained for personal profile extraction, so results may occasionally be off. You can reject incorrect memories or close outdated ones in the Web Dashboard. As conversations accumulate, the River Algorithm continuously self-corrects through multi-turn verification and contradiction detection.
## Quick Start

### Option A: Docker Compose (Recommended)
The fastest way to get started. No Python, PostgreSQL, or configuration file needed.
```bash
# 1. Get the compose file
mkdir jkriver && cd jkriver
curl -O https://raw.githubusercontent.com/wangjiake/JKRiver/main/docker/docker-compose.yaml

# 2. Start everything
docker compose pull && docker compose up -d

# 3. Get your access token (generated automatically on first start)
docker compose logs jkriver 2>&1 | grep "ACCESS_TOKEN"
```
Open http://localhost:1234 in your browser, enter the access token, then go to System to set your API key. That's it.
| Service | URL | What it does |
|---|---|---|
| JKRiver | http://localhost:1234 | Web chat + system config |
| RiverHistory | http://localhost:2345 | Profile viewer |
| API Docs | http://localhost:8400/docs | REST API reference |
Full Docker guide (chat bots, data import, demo, configuration): docker/README.md
### Option B: From Source
1. Prerequisites
- Python 3.10+
- PostgreSQL 16+ — Install guide
- Ollama (optional) — ollama.ai, only needed for local LLM mode
2. Clone and install
```bash
git clone https://github.com/wangjiake/JKRiver.git
cd JKRiver
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```
3. Set up PostgreSQL
```bash
# Create the database (replace YOUR_USERNAME with your PostgreSQL user)
createdb -h localhost -U YOUR_USERNAME Riverse

# Create all tables
psql -h localhost -U YOUR_USERNAME -d Riverse -f agent/schema.sql
```
Tip: On macOS/Linux, run `whoami` to find your username. On Windows with a default PostgreSQL install, the user is usually `postgres`.
4. Configure
```bash
cp settings.yaml.default settings.yaml
```

Edit `settings.yaml` — at minimum, change these:
```yaml
database:
  user: "YOUR_USERNAME"        # Your PostgreSQL username

llm_provider: "openai"         # "openai" for cloud API, "local" for Ollama

openai:
  api_key: "sk-your-key-here"  # Required if using the openai provider
```
Full configuration guide: wangjiake.github.io/riverse-docs
5. Run
Web Dashboard (recommended) — starts FastAPI backend + Flask frontend together:
```bash
python scripts/start_local.py
```
| Service | URL |
|---|---|
| Web Dashboard | http://localhost:1234 |
| API Docs | http://localhost:8400/docs |
Or start services individually:
```bash
uvicorn agent.api:app --host 127.0.0.1 --port 8400  # FastAPI backend
python web.py                                       # Flask frontend (http://localhost:1234)
python -m agent.main                                # CLI mode
python -m agent.telegram_bot                        # Telegram Bot
python -m agent.discord_bot                         # Discord Bot
```
Note: The web dashboard requires both services running. `start_local.py` handles this automatically, and also starts the Telegram/Discord bots if they are enabled in `settings.yaml`.
## Testing
```bash
# Quick checks — verify imports and database schema (no LLM needed)
python tests/test_imports.py
python tests/test_db.py

# End-to-end pipeline test — requires LLM + database
python tests/test_demo_pipeline.py                      # demo2.json (52 sessions, English)
python tests/test_demo_pipeline.py tests/data/demo.json # demo.json (50 sessions, Chinese)
python tests/test_demo_pipeline.py --sessions 3         # Quick smoke test (3 sessions only)

# Clean up test data from database
python tests/test_demo_pipeline.py --clean
```
Test data is included in tests/data/. No external dependencies needed.
## Tech Stack
| Layer | Technology |
|---|---|
| Runtime | Python 3.10+, PostgreSQL 16+ |
| Local LLM | Ollama (any compatible model) |
| Cloud LLM | OpenAI GPT-4o / DeepSeek (fallback) |
| Embeddings | Ollama + BGE-M3 (auto-accelerated with pgvector if installed) |
| REST API | FastAPI + Uvicorn |
| Web Dashboard | Flask |
| Telegram / Discord | python-telegram-bot / discord.py |
| Voice / Vision | Whisper-1, GPT-4 Vision, LLaVA |
| TTS | Edge TTS |
## Security Notice
Riverse is designed as a single-user, local-first application. The Web Dashboard is protected by the access token generated on first startup. However, the REST API (port 8400) has no authentication — do not expose it to the public internet. If you need remote access, place it behind a reverse proxy (e.g. Nginx, Caddy) with authentication, or use an SSH tunnel.
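For example, a minimal Caddy (v2.8+) reverse-proxy setup with HTTP basic auth might look like this. The domain and username are placeholders, and you should generate the password hash yourself with `caddy hash-password`; treat this as a sketch, not a vetted production config.

```
# Example Caddyfile sketch: basic auth in front of the unauthenticated API.
river.example.com {
    basic_auth {
        # Replace with the bcrypt hash printed by `caddy hash-password`
        river <bcrypt-hash>
    }
    reverse_proxy 127.0.0.1:8400
}
```

Caddy provisions TLS certificates automatically for the domain, so the token and API traffic are also encrypted in transit.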
## License
| Use Case | License |
|---|---|
| Personal / Open Source | AGPL-3.0 — free to use, modifications must be open-sourced |
| Commercial / Closed Source | Contact [email protected] |
## Contact
- X (Twitter): @JKRiverse
- Discord: Join
- Email: [email protected]