lmstudio-toolpack
Health: Passed
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 25 GitHub stars
Code: Failed
- eval() — Dynamic code execution via eval() in MCPs/python-sandbox.py
- exec() — Arbitrary code execution via exec() in MCPs/python-sandbox.py
Permissions: Passed
- Permissions — No dangerous permissions requested
This is a Python-based Model Context Protocol (MCP) toolpack designed for local LLMs. It provides a single environment to easily add web search, long-term memory, and a Python code execution sandbox to an AI agent.
Security Assessment
The overall risk is High. The project explicitly uses `eval()` and `exec()` within its Python sandbox environment to let AI agents run dynamically generated code. While the developer openly warns users about this in the documentation, it represents a massive security liability. If connected to a malicious or unpredictable LLM, the agent could execute dangerous shell commands on the host machine. There are no hardcoded secrets and no explicitly dangerous permissions required, but the inherent design of the sandbox tool exposes your local system to severe arbitrary code execution risks.
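To illustrate the class of risk described above (not the project's actual code), here is a minimal sketch of why `exec()`-based sandboxes are hard to contain: even when the code is run against an emptied globals dictionary, Python silently re-injects the builtins, so untrusted code can still import `os` and reach the host.

```python
# Hypothetical illustration of why exec()-based "sandboxes" leak.
# The payload stands in for untrusted, model-generated code.
payload = "import os; result = os.getcwd()"

scope = {}            # a "sandbox" with no names pre-defined
exec(payload, scope)  # Python adds __builtins__ automatically, so the import succeeds

print(scope["result"])  # the untrusted code has read host state
```

This is why the verdict below recommends running the sandbox only in isolated environments: restricting the namespace alone does not restrict what executed code can do.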
Quality Assessment
The project appears to be actively maintained, with its last push occurring just today. It is released under the permissive MIT license and has garnered 25 GitHub stars, indicating a fair level of community interest and basic trust; however, the ecosystem around it is still relatively small.
Verdict
Use with caution: while the code itself is transparent and the pack is highly active, only run the Python sandbox in isolated environments to prevent unchecked AI actions from damaging your host system.
An MCP stdio toolpack for local LLMs
Local MCP Tools Collection
A small collection of Model Context Protocol (MCP) tools, built for local LLMs. One venv, many options.
Why does it exist?
MCP servers are currently scattered: there is no simple tool pack, and each tool has to be set up individually.
This tool pack targets convenient local use. I will expand the collection over time.
Make local LLMs more powerful, yet simpler.
Features
- MCP JSON configuration file generation: run `main.py` and go through the wizard to complete the generation
- One venv for multiple MCP servers
MCP Servers
- Web Search: uses DuckDuckGo as the search engine; fetches and summarizes top results
- Python Sandbox: lets agents run Python with numpy and sympy; good for math
- Long-term Memory: lets agents memorize things for long-term use
Notes
- It uses stdio by default; you can switch it to HTTP in `GlobalConfig`
- In `python-sandbox.py`, the `exec()` function is used to let the agent execute Python scripts; keep an eye on your agents
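The sandbox pattern that note describes, running agent-supplied code through `exec()` and returning what it prints, can be sketched roughly as follows. This is a hypothetical helper for illustration, not the pack's actual implementation:

```python
import contextlib
import io

def run_python(code: str) -> str:
    """Hypothetical sketch of an exec()-based sandbox tool:
    run agent-supplied code and capture anything it prints."""
    buffer = io.StringIO()
    scope: dict = {}
    with contextlib.redirect_stdout(buffer):
        exec(code, scope)  # no real isolation: runs with full host privileges
    return buffer.getvalue()

print(run_python("print(2 + 2)"))  # → 4
```

Note that redirecting stdout only captures output; it does nothing to constrain what the executed code may touch, which is exactly why the README tells you to watch your agents.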
Requirements
- Python >= 3.13
- Managed with `uv`
Install
Using uv:

```shell
uv sync
```
Run the MCP Server
```shell
python MCPs/python-sandbox.py
```
The server communicates over stdio (FastMCP). Point your MCP-compatible client at the executable command above.
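For orientation, the stdio transport carries JSON-RPC 2.0 messages between client and server, one JSON object per line. A tool invocation a client might send could look like the following; the method name follows the MCP spec, while the tool name and arguments here are illustrative:

```python
import json

# Illustrative JSON-RPC 2.0 request as carried over MCP's stdio transport.
# "tools/call" is the MCP method for invoking a tool; the tool name is hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "run_python", "arguments": {"code": "print(1)"}},
}
line = json.dumps(request)  # stdio transport sends one JSON message per line
print(line)
```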
Tool Usage Examples
Run `main.py` to auto-generate the JSON configuration. You will get something like this:
```json
{
  "mcpServers": {
    "memory": {
      "command": "E:\\LMStudio\\mcp\\lmstudio-toolpack\\.venv\\Scripts\\python.exe",
      "args": [
        "E:\\LMStudio\\mcp\\lmstudio-toolpack\\MCPs\\Memory.py"
      ]
    },
    "python-sandbox": {
      "command": "E:\\LMStudio\\mcp\\lmstudio-toolpack\\.venv\\Scripts\\python.exe",
      "args": [
        "E:\\LMStudio\\mcp\\lmstudio-toolpack\\MCPs\\python-sandbox.py"
      ]
    },
    "websearch": {
      "command": "E:\\LMStudio\\mcp\\lmstudio-toolpack\\.venv\\Scripts\\python.exe",
      "args": [
        "E:\\LMStudio\\mcp\\lmstudio-toolpack\\MCPs\\WebSearch.py"
      ]
    }
  }
}
```
Change the names if needed.
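A config of that shape can be produced with a few lines of stdlib Python. This is a sketch of what the wizard's output amounts to, not its actual logic; the paths below are placeholders for your checkout location:

```python
import json
from pathlib import Path

# Hypothetical sketch of generating the MCP client config: one entry per
# MCP script, each launched with the venv's Python interpreter.
venv_python = Path(".venv") / "Scripts" / "python.exe"  # Windows venv layout
scripts = {
    "memory": "Memory.py",
    "python-sandbox": "python-sandbox.py",
    "websearch": "WebSearch.py",
}

config = {
    "mcpServers": {
        name: {
            "command": str(venv_python.resolve()),
            "args": [str((Path("MCPs") / script).resolve())],
        }
        for name, script in scripts.items()
    }
}
print(json.dumps(config, indent=2))
```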
Another Idea
If you choose HTTP, you can use 1mcp to unify them all and run it on a remote server, e.g. connect a Raspberry Pi to Tailscale and set it up remotely.
