MemOS

MCP Security Audit: Warn

Health: Pass
  • License — Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 8909 GitHub stars
Code: Warn
  • network request — Outbound network request in apps/MemOS-Cloud-OpenClaw-Plugin/index.js
  • network request — Outbound network request in apps/MemOS-Cloud-OpenClaw-Plugin/lib/arms-reporter.js
Permissions: Pass
  • Permissions — No dangerous permissions requested
Purpose
This tool serves as a self-evolving memory operating system for LLMs and AI agents. It provides ultra-persistent memory, hybrid retrieval, and cross-task skill reuse to help agents remember context and reduce token consumption.

Security Assessment
The overall risk is rated as Medium. The tool does not request dangerous system permissions or execute hidden shell commands, and no hardcoded secrets were found. However, the audit flagged outbound network requests inside the cloud plugin files (specifically `index.js` and `arms-reporter.js`). Because this tool is designed to handle your AI's conversational context and memory, it inherently accesses potentially sensitive data. If you are using the cloud service version rather than the 100% local plugin, your data will be transmitted over the internet to external servers.

Quality Assessment
The project is in excellent health and demonstrates high community trust, backed by nearly 9,000 GitHub stars. It is under active development, with its most recent code push occurring today. Furthermore, it is properly open-source and legally safe to integrate, as it is distributed under the standard Apache-2.0 license.

Verdict
Use with caution: the local plugin is safe to use, but you should audit the cloud plugin's outbound network requests before letting it handle sensitive agent data.
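A first pass at the suggested audit is a static scan of the plugin source for outbound-request patterns. The sketch below is illustrative, not exhaustive: the regex patterns and the directory path (taken from the flagged files above) are assumptions, and a real review should still read the matched code by hand.

```python
import re
from pathlib import Path

# Common outbound-request markers in JavaScript/Node sources (illustrative, not exhaustive).
PATTERNS = re.compile(r"fetch\s*\(|axios\.|https?://|http\.request|\bnet\.connect")

def scan_for_network_calls(root):
    """Return {file: [matching 'lineno: line' strings]} for every .js file under `root`."""
    hits = {}
    for path in Path(root).rglob("*.js"):
        lines = [
            f"{n}: {line.strip()}"
            for n, line in enumerate(path.read_text(errors="ignore").splitlines(), 1)
            if PATTERNS.search(line)
        ]
        if lines:
            hits[str(path)] = lines
    return hits

if __name__ == "__main__":
    # Path from the audit findings above; only scanned if the repo is checked out locally.
    target = Path("apps/MemOS-Cloud-OpenClaw-Plugin")
    if target.is_dir():
        for f, flagged in scan_for_network_calls(target).items():
            print(f)
            for entry in flagged:
                print("   ", entry)
```

Every hit should be traced to its destination host before the plugin is trusted with agent memory.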
Summary

Self-evolving memory OS for LLM & AI Agents: ultra-persistent memory, hybrid-retrieval, and cross-task skill reuse, with 35.24% token savings

README.md

MemOS 2.0 Stardust (星尘)


Docs · ArXiv · X · Discord · Resources

🎯 +43.70% Accuracy vs. OpenAI Memory
🏆 Top-tier Long-term Memory + Personalization
💰 Saves 35.24% Memory Tokens
LoCoMo 75.80 • LongMemEval +40.43% • PrefEval-10 +2568% • PersonaMem +40.75%

🧠 MemOS Plugin: Persistent Memory for Your AI Agents ✨


Your OpenClaw and Hermes agents now get a best-in-class memory system: choose the Cloud Service or a Self-hosted plugin to get started 🏃🏻

🔌 Plugins

  • 👧🏻 Hermes Agent Local Plugin
    Visual management via Web UI. 100% local, hybrid retrieval, smart dedup,
    skill evolution, multi-agent knowledge hub.
    📑 Docs · 🐙 GitHub
  • ☁️ OpenClaw Cloud Plugin
    Hosted memory service on MemOS Cloud.
    🖥️ MemOS Dashboard · 📖 Full Tutorial
  • 🦐 OpenClaw Local Plugin
    Zero cloud dependency via local SQLite. FTS5 + vector search, task auto-summarization,
    multi-agent memory isolation & skill sharing.
    🌐 Homepage · 📑 Docs · 📦 NPM

👾 MemOS: Memory Operating System for LLM & AI Agents

MemOS is a Memory Operating System for LLMs and AI agents. It unifies store / retrieve / manage operations for long-term memory, enabling context-aware, personalized interactions, with knowledge-base, multi-modal, and tool memory plus enterprise-grade optimizations built in.

Key Features

  • Unified Memory API: A single API to add, retrieve, edit, and delete memory—structured as a graph, inspectable and editable by design, not a black-box embedding store.
  • Multi-Modal Memory: Natively supports text, images, tool traces, and personas, retrieved and reasoned together in one memory system.
  • Multi-Cube Knowledge Base Management: Manage multiple knowledge bases as composable memory cubes, enabling isolation, controlled sharing, and dynamic composition across users, projects, and agents.
  • Asynchronous Ingestion via MemScheduler: Run memory operations asynchronously with millisecond-level latency for production stability under high concurrency.
  • Memory Feedback & Correction: Refine memory with natural-language feedback—correcting, supplementing, or replacing existing memories over time.
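To make the "graph, inspectable and editable by design" idea concrete, here is a toy in-memory analogue of the add / retrieve / edit / delete lifecycle. This is a conceptual illustration only, not the MemOS API: the class and method names are invented, and real retrieval is hybrid (lexical + vector), not a keyword match.

```python
class ToyMemoryGraph:
    """Conceptual stand-in for a graph-structured, editable memory store (not the MemOS API)."""

    def __init__(self):
        self.nodes = {}   # memory_id -> text; every memory is inspectable
        self.edges = []   # (src_id, relation, dst_id) links between memories

    def add(self, memory_id, text, links=()):
        self.nodes[memory_id] = text
        for relation, dst in links:
            self.edges.append((memory_id, relation, dst))
        return memory_id

    def retrieve(self, keyword):
        # Naive keyword retrieval; production systems use hybrid lexical + vector search.
        return [mid for mid, text in self.nodes.items() if keyword.lower() in text.lower()]

    def edit(self, memory_id, new_text):
        # Memories are mutable records, not frozen black-box embeddings.
        self.nodes[memory_id] = new_text

    def delete(self, memory_id):
        self.nodes.pop(memory_id, None)
        self.edges = [e for e in self.edges if memory_id not in (e[0], e[2])]
```

The point of the sketch is the contract, not the implementation: each memory has an identity you can read, correct, or remove, and relations between memories are first-class data.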

News

  • 2026-04-10 · 👧🏻 MemOS Hermes Agent Local Plugin
    Official Hermes Agent memory plugins launched: Hybrid retrieval (FTS5 + vector), smart dedup, tiered skill evolution, multi-agent collaboration. 100% local, zero cloud dependency.

  • 2026-03-08 · 🦞 MemOS OpenClaw Plugin — Cloud & Local
    Official OpenClaw memory plugins launched. Cloud Plugin: hosted memory service with 72% lower token usage and multi-agent memory sharing (MemOS-Cloud-OpenClaw-Plugin). Local Plugin (v1.0.0): 100% on-device memory with persistent SQLite, hybrid search (FTS5 + vector), task summarization & skill evolution, multi-agent collaboration, and a full Memory Viewer dashboard.

  • 2025-12-24 · 🎉 MemOS v2.0: Stardust (星尘) Release
    Comprehensive KB (doc/URL parsing + cross-project sharing), memory feedback & precise deletion, multi-modal memory (images/charts), tool memory for agent planning, Redis Streams scheduling + DB optimizations, streaming/non-streaming chat, MCP upgrade, and lightweight quick/full deployment.

    New Features

    Knowledge Base & Memory

    • Added knowledge base support for long-term memory from documents and URLs

    Feedback & Memory Management

    • Added natural language feedback and correction for memories
    • Added memory deletion API by memory ID
    • Added MCP support for memory deletion and feedback

    Conversation & Retrieval

    • Added chat API with memory-aware retrieval
    • Added memory filtering with custom tags (Cloud & Open Source)

    Multimodal & Tool Memory

    • Added tool memory for tool usage history
    • Added image memory support for conversations and documents

    📈 Improvements

    Data & Infrastructure

    • Upgraded database for better stability and performance

    Scheduler

    • Rebuilt task scheduler with Redis Streams and queue isolation
    • Added task priority, auto-recovery, and quota-based scheduling

    Deployment & Engineering

    • Added lightweight deployment with quick and full modes

    🐞 Bug Fixes

    Memory Scheduling & Updates

    • Fixed legacy scheduling API to ensure correct memory isolation
    • Fixed memory update logging to show new memories correctly

  • 2025-08-07 · 🎉 MemOS v1.0.0 (MemCube) Release
    First MemCube release with a word-game demo, LongMemEval evaluation, BochaAISearchRetriever integration, improved search capabilities, and the official Playground launch.

    New Features

    Playground

    • Expanded Playground features and algorithm performance.

    MemCube Construction

    • Added a text game demo based on the MemCube novel.

    Extended Evaluation Set

    • Added LongMemEval evaluation results and scripts.

    📈 Improvements

    Plaintext Memory

    • Integrated internet search with Bocha.
    • Expanded graph database support.
    • Added contextual understanding for the tree-structured plaintext memory search interface.

    🐞 Bug Fixes

    KV Cache Concatenation

    • Fixed the concat_cache method.

    Plaintext Memory

    • Fixed graph search-related issues.

  • 2025-07-07 · 🎉 MemOS v1.0: Stellar (星河) Preview Release
    A SOTA Memory OS for LLMs is now open-sourced.

  • 2025-07-04 · 🎉 MemOS Paper Release
    MemOS: A Memory OS for AI System is available on arXiv.

  • 2024-07-04 · 🎉 Memory3 Model Release at WAIC 2024
    The Memory3 model, featuring a memory-layered architecture, was unveiled at the 2024 World Artificial Intelligence Conference.


🚀 Quick-start Guide

☁️ 1. Cloud API (Hosted)

Get API Key

Next Steps

🖥️ 2. Self-Hosted (Local/Private)

  1. Get the repository.
    git clone https://github.com/MemTensor/MemOS.git
    cd MemOS
    pip install -r ./docker/requirements.txt
    
  2. Copy docker/.env.example to MemOS/.env and configure it.
  • The OPENAI_API_KEY, MOS_EMBEDDER_API_KEY, MEMRADER_API_KEY, and other keys can be obtained through BaiLian.
  • Fill in the corresponding configuration in the MemOS/.env file.
  • Supported LLM providers: OpenAI, Azure OpenAI, Qwen (DashScope), DeepSeek, MiniMax, Ollama, HuggingFace, vLLM. Set MOS_CHAT_MODEL_PROVIDER to select the backend (e.g., openai, qwen, deepseek, minimax).
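The resulting MemOS/.env might look like the sketch below. Only the key names mentioned in the bullets above are shown; the values are placeholders you must replace with your own credentials.

```ini
# MemOS/.env (sketch; values are placeholders)
OPENAI_API_KEY=<your key>
MOS_EMBEDDER_API_KEY=<your key>
MEMRADER_API_KEY=<your key>
# Select the chat backend: openai, qwen, deepseek, minimax, ...
MOS_CHAT_MODEL_PROVIDER=openai
```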
  3. Start the service.
  • Launch via Docker

    Tip: make sure Docker Compose is installed before running the following commands.
    # Enter the docker directory and start the services
    cd docker
    docker compose up
    
    For detailed steps, see the Docker Reference.
  • Launch via the uvicorn command line interface (CLI)

    Tip: make sure Neo4j and Qdrant are running before executing the following command.
    cd src
    uvicorn memos.api.server_api:app --host 0.0.0.0 --port 8001 --workers 1
    
    For detailed integration steps, see the CLI Reference.

Basic Usage (Self-Hosted)

  • Add User Message
    import requests
    import json
    
    data = {
        "user_id": "8736b16e-1d20-4163-980b-a5063c3facdc",
        "mem_cube_id": "b32d0977-435d-4828-a86f-4f47f8b55bca",
        "messages": [
            {
                "role": "user",
                "content": "I like strawberry"
            }
        ],
        "async_mode": "sync"
    }
    headers = {
        "Content-Type": "application/json"
    }
    url = "http://localhost:8000/product/add"
    
    res = requests.post(url=url, headers=headers, data=json.dumps(data))
    print(f"result: {res.json()}")
    
  • Search User Memory
    import requests
    import json
    
    data = {
        "query": "What do I like",
        "user_id": "8736b16e-1d20-4163-980b-a5063c3facdc",
        "mem_cube_id": "b32d0977-435d-4828-a86f-4f47f8b55bca"
    }
    headers = {
        "Content-Type": "application/json"
    }
    url = "http://localhost:8000/product/search"
    
    res = requests.post(url=url, headers=headers, data=json.dumps(data))
    print(f"result: {res.json()}")
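The two calls above share the same headers and JSON envelope, so they can be wrapped in a small helper. The sketch below uses only the `/product/add` and `/product/search` endpoints shown above; the class name, defaults, and use of the standard library's urllib (instead of requests) are illustrative choices, not part of the MemOS API.

```python
import json
import urllib.request

class MemOSClient:
    """Tiny wrapper around the self-hosted /product/add and /product/search endpoints (sketch)."""

    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url

    @staticmethod
    def _body(**fields):
        # Build the JSON body, dropping unset fields.
        return {k: v for k, v in fields.items() if v is not None}

    def _post(self, path, body):
        req = urllib.request.Request(
            self.base_url + path,
            data=json.dumps(body).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as res:
            return json.load(res)

    def add(self, user_id, mem_cube_id, content, role="user", async_mode="sync"):
        body = self._body(user_id=user_id, mem_cube_id=mem_cube_id,
                          messages=[{"role": role, "content": content}],
                          async_mode=async_mode)
        return self._post("/product/add", body)

    def search(self, query, user_id, mem_cube_id):
        body = self._body(query=query, user_id=user_id, mem_cube_id=mem_cube_id)
        return self._post("/product/search", body)
```

With a running server, `MemOSClient().add(user_id, cube_id, "I like strawberry")` followed by `MemOSClient().search("What do I like", user_id, cube_id)` mirrors the two raw examples above.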
    

📚 Resources

  • Awesome-AI-Memory
    This is a curated repository dedicated to resources on memory and memory systems for large language models. It systematically collects relevant research papers, frameworks, tools, and practical insights. The repository aims to organize and present the rapidly evolving research landscape of LLM memory, bridging multiple research directions including natural language processing, information retrieval, agentic systems, and cognitive science.

    Get started 👉🏻 IAAR-Shanghai/Awesome-AI-Memory

  • MemOS Cloud OpenClaw Plugin
    Official OpenClaw lifecycle plugin for MemOS Cloud. It automatically recalls context from MemOS before the agent starts and saves the conversation back to MemOS after the agent finishes.

    Get started 👉🏻 MemTensor/MemOS-Cloud-OpenClaw-Plugin


💬 Community & Support

Join our community to ask questions, share your projects, and connect with other developers.

  • GitHub Issues: Report bugs or request features in our GitHub Issues.
  • GitHub Pull Requests: Contribute code improvements via Pull Requests.
  • GitHub Discussions: Participate in our GitHub Discussions to ask questions or share ideas.
  • Discord: Join our Discord Server.
  • WeChat: Scan the QR code to join our WeChat group.

📜 Citation

Note: We publicly released the Short Version on May 28, 2025, making it the earliest work to propose the concept of a Memory Operating System for LLMs.

If you use MemOS in your research, we would appreciate citations to our papers.


@article{li2025memos_long,
  title={MemOS: A Memory OS for AI System},
  author={Li, Zhiyu and Song, Shichao and Xi, Chenyang and Wang, Hanyu and Tang, Chen and Niu, Simin and Chen, Ding and Yang, Jiawei and Li, Chunyu and Yu, Qingchen and Zhao, Jihao and Wang, Yezhaohui and Liu, Peng and Lin, Zehao and Wang, Pengyuan and Huo, Jiahao and Chen, Tianyi and Chen, Kai and Li, Kehang and Tao, Zhen and Ren, Junpeng and Lai, Huayi and Wu, Hao and Tang, Bo and Wang, Zhenren and Fan, Zhaoxin and Zhang, Ningyu and Zhang, Linfeng and Yan, Junchi and Yang, Mingchuan and Xu, Tong and Xu, Wei and Chen, Huajun and Wang, Haofeng and Yang, Hongkang and Zhang, Wentao and Xu, Zhi-Qin John and Chen, Siheng and Xiong, Feiyu},
  journal={arXiv preprint arXiv:2507.03724},
  year={2025},
  url={https://arxiv.org/abs/2507.03724}
}

@article{li2025memos_short,
  title={MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models},
  author={Li, Zhiyu and Song, Shichao and Wang, Hanyu and Niu, Simin and Chen, Ding and Yang, Jiawei and Xi, Chenyang and Lai, Huayi and Zhao, Jihao and Wang, Yezhaohui and others},
  journal={arXiv preprint arXiv:2505.22101},
  year={2025},
  url={https://arxiv.org/abs/2505.22101}
}

@article{yang2024memory3,
  author = {Yang, Hongkang and Lin, Zehao and Wang, Wenjin and Wu, Hao and Li, Zhiyu and Tang, Bo and Wei, Wenqiang and Wang, Jinbo and Tang, Zeyun and Song, Shichao and Xi, Chenyang and Yu, Yu and Chen, Kai and Xiong, Feiyu and Tang, Linpeng and E, Weinan},
  title = {Memory$^3$: Language Modeling with Explicit Memory},
  journal = {Journal of Machine Learning},
  year = {2024},
  volume = {3},
  number = {3},
  pages = {300--346},
  issn = {2790-2048},
  doi = {10.4208/jml.240708},
  url = {https://global-sci.com/article/91443/memory3-language-modeling-with-explicit-memory}
}

🙌 Contributing

We welcome contributions from the community! Please read our contribution guidelines to get started.


📄 License

MemOS is licensed under the Apache 2.0 License.
