memwire
Health — Passed
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 10 GitHub stars
Code — Warning
- network request — Outbound network request in benchmark/locomo_web_benchmark.py
Permissions — Passed
- Permissions — No dangerous permissions requested
This project is a self-hosted, open-source memory infrastructure layer for AI agents. It allows applications to store and retrieve persistent conversational memory and knowledge documents using graph-based semantic search.
Security Assessment
The overall risk is Medium. The repository does not request dangerous permissions or execute shell commands, and no hardcoded secrets were found. However, the automated scan flagged an outbound network request in a benchmarking script (`benchmark/locomo_web_benchmark.py`). While this is relatively common for testing tools, users should still inspect the script to ensure no unexpected data is being transmitted. Additionally, as an AI memory layer, the tool inherently processes and stores whatever data is passed to it by your applications. If you are handling sensitive or personal information, you must ensure your self-hosted deployment and underlying databases are properly secured and isolated.
Quality Assessment
The project appears healthy and actively developed: the most recent push was today (0 days ago), indicating ongoing maintenance, and it is released under the permissive Apache 2.0 license, a good fit for open-source collaboration. Community trust is still in its early stages at 10 GitHub stars, so while the project is not yet widely battle-tested, the foundation and documentation look solid.
Verdict
Use with caution: the core package appears safe and well-licensed, but developers should verify the outbound network request in the benchmarking files and apply strict security controls to any sensitive data stored in the database.
Open source self-hosted AI memory infrastructure layer
MemWire
Enterprise-grade, self-hosted AI memory infrastructure layer. Deploy persistent AI memory on-premise or in any cloud with your own LLM and database.
What is MemWire?
MemWire is an open-source, enterprise-ready AI memory infrastructure layer. It gives your AI applications persistent, auditable memory: structured, updatable facts and fast semantic retrieval across conversations and knowledge, built on graph-based memory.
Truly open source! MemWire is a fully research- and community-driven project.
- Fully customizable — adapt schemas, memory types, and pipelines to your use case
- Self-hosted — run entirely on your local machine, on-premise or in your own cloud
- Multi-tenant — isolate applications, users, and workspaces securely
- Bring your own database — PostgreSQL pgvector, Qdrant, Pinecone, ChromaDB, Weaviate, or your preferred stack
- Bring your own LLM — OpenAI, Anthropic, Gemini, Ollama, or any provider
- Deploy anywhere — edge, private cloud, public cloud, air-gapped environments
- Knowledge ingestion — ingest documents (PDF, Excel, CSV, etc.) alongside conversation memory; recalled together at query time
- Auditable — every memory is traceable, categorized (fact, preference, instruction, event, entity), and inspectable
- Feedback loop — reinforce memory paths that led to good responses; unused edges decay over time
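The feedback loop in the last bullet can be pictured as a simple strength update. The sketch below is purely illustrative and is not MemWire's actual algorithm; the half-life and boost values are invented for the example:

```python
def decay_strength(strength: float, days_unused: float, half_life_days: float = 30.0) -> float:
    """Halve a memory edge's strength for every `half_life_days` without use."""
    return strength * 0.5 ** (days_unused / half_life_days)

def reinforce(strength: float, boost: float = 0.25, cap: float = 1.0) -> float:
    """Positive feedback nudges strength back up, capped at full strength."""
    return min(cap, strength + boost)

# An edge untouched for one half-life drops to half strength;
# one round of positive feedback pulls it partway back.
s = decay_strength(1.0, days_unused=30.0)  # 0.5
s = reinforce(s)                           # 0.75
```

The general shape matters more than the constants: reinforced paths stay near full strength, while edges that never contribute to a good response fade toward zero.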
Quickstart
Python SDK
Install
pip install memwire
Embedded mode
Data is stored on disk in ./memwire_data/.
from memwire import MemWire, MemWireConfig
config = MemWireConfig(
    qdrant_path="./memwire_data",  # local vector store
    qdrant_collection_prefix="app_",
)
memory = MemWire(config=config)
USER_ID = "alice"
# Add messages to memory
records = memory.add(
    user_id=USER_ID,
    messages=[{"role": "user", "content": "I prefer dark mode and short answers."}],
)
for r in records:
    print(f"[stored] ({r.category}) {r.content}")
# Recall relevant context for a query
result = memory.recall("How should I format my answers?", user_id=USER_ID)
if result.formatted:
    print(result.formatted)
# → "alice prefers dark mode and short answers."
# Inject recalled context into your LLM prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]
if result.formatted:
    messages.append(
        {"role": "system", "content": f"Memory context:\n{result.formatted}"}
    )
messages.append({"role": "user", "content": "How should I format my answers?"})
# After you get the LLM response, reinforce the memory paths that were used
memory.feedback(assistant_response="<assistant response here>", user_id=USER_ID)
# Search memories by keyword / semantic similarity
hits = memory.search("dark mode", user_id=USER_ID, top_k=5)
for record, score in hits:
    print(f"[{score:.2f}] ({record.category}) {record.content}")
# Inspect stats
stats = memory.get_stats(user_id=USER_ID)
print(stats) # {"memories": 1, "nodes": ..., "edges": ..., "knowledge_bases": 0}
# Always close to flush background writes
memory.close()
With a local Qdrant server
docker run -p 6333:6333 qdrant/qdrant
config = MemWireConfig(
    qdrant_url="http://localhost:6333",
    qdrant_collection_prefix="app_",
)
memory = MemWire(config=config)
REST API
The api/ folder provides a self-hosted REST API backed by FastAPI and Qdrant.
Start the server
cd api
docker compose up --build # Qdrant + MemWire API on :8000
Store memory
curl -X POST http://localhost:8000/v1/memories \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "messages": [
      { "role": "user", "content": "I prefer dark mode and short answers." }
    ]
  }'

[
  {
    "memory_id": "mem_3f7a1c2d9e4b",
    "user_id": "alice",
    "content": "I prefer dark mode and short answers.",
    "role": "user",
    "category": "preference",
    "strength": 1.0
  }
]
Recall context
curl -X POST http://localhost:8000/v1/memories/recall \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "query": "How should I format my answers?"
  }'

{
  "query": "How should I format my answers?",
  "supporting": [{ "tokens": ["dark", "mode"], "score": 0.87, "memories": [...] }],
  "conflicting": [],
  "knowledge": [],
  "formatted": "alice prefers dark mode and short answers.",
  "has_conflicts": false
}
Search memories
curl -X POST http://localhost:8000/v1/memories/search \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "alice",
    "app_id": "app_a",
    "workspace_id": "team_1",
    "query": "dark mode",
    "limit": 10
  }'

[
  {
    "memory": {
      "memory_id": "mem_3f7a1c2d9e4b",
      "content": "I prefer dark mode and short answers.",
      "category": "preference"
    },
    "score": 0.94
  }
]
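The same three endpoints can be driven from Python. A minimal stdlib-only sketch, assuming the docker compose stack above is running on :8000; the helper names `memwire_body` and `post` are ours, not part of the API:

```python
import json
import urllib.request

BASE = "http://localhost:8000"

def memwire_body(user_id: str, app_id: str, workspace_id: str, **fields) -> dict:
    """Build the JSON body shared by the store, recall, and search endpoints."""
    return {"user_id": user_id, "app_id": app_id, "workspace_id": workspace_id, **fields}

def post(path: str, body: dict) -> dict:
    """POST a JSON body to the MemWire API and decode the JSON response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires the server started above):
# post("/v1/memories", memwire_body("alice", "app_a", "team_1",
#      messages=[{"role": "user", "content": "I prefer dark mode."}]))
# recall = post("/v1/memories/recall",
#               memwire_body("alice", "app_a", "team_1",
#                            query="How should I format my answers?"))
```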
See API Reference for configuration options and local development setup.
Customization
All MemWire behaviour is controlled through MemWireConfig. Choose your vector store, embedding model, and LLM provider, then tune recall and graph settings to fit your use case. Learn more.
Supported databases
| Storage | Type | Status | Notes |
|---|---|---|---|
| Qdrant | Vector store | ✅ Supported | Embedded, local server, or Qdrant Cloud |
Supported LLMs
MemWire is model-agnostic. Memory operations like storage, recall, and search work with any language model or provider.
| Provider | Example |
|---|---|
| OpenAI | examples/openai/ |
| Azure OpenAI | examples/azure-openai/ |
| Anthropic, Gemini, Ollama, or any other | Pass the recalled context into any LLM |
Roadmap
See ROADMAP.md for the full plan.
Contributing
PRs and issues are welcome. See CONTRIBUTING.md and GOVERNANCE.md.
License
Apache-2.0