sktime-mcp
Health: Warn
- License: BSD-3-Clause
- Description: repository has a description
- Active repo: last push 0 days ago
- Low visibility: only 6 GitHub stars
Code: Pass
- Code scan: 12 files scanned during a light audit; no dangerous patterns found
Permissions: Pass
- No dangerous permissions requested
This tool acts as a semantic bridge using the Model Context Protocol (MCP) to expose the sktime machine learning library's registry directly to Large Language Models (LLMs). It enables an AI assistant to programmatically discover, reason about, and execute time-series analysis workflows on local data.
Security Assessment
The overall risk is rated as Low. The automated code scan reviewed 12 files and found no dangerous patterns, hardcoded secrets, or requests for overly broad permissions. The server uses local `stdio` transport by default rather than opening a network port, which minimizes external attack vectors. However, as an execution engine, it allows the LLM to run real sktime workflows on your machine. While no malicious network calls or shell executions were detected in the base code, the underlying nature of any tool that permits an LLM to dynamically instantiate and run code on local data warrants standard caution.
Quality Assessment
The project is actively maintained, with its most recent push occurring today. It benefits from a clean BSD-3-Clause open-source license and has clear, comprehensive documentation. The only notable weakness is its extremely low community visibility; it currently has only 6 GitHub stars, indicating it is in its early stages and has not yet undergone widespread community review.
Verdict
Safe to use, but exercise standard caution regarding which data you allow an LLM to process and execute locally.
An MCP (Model Context Protocol) layer that exposes sktime’s native registry and semantics to an LLM
sktime-mcp
Read the Documentation | PyPI Package
MCP (Model Context Protocol) layer for sktime - Registry-Driven for LLMs
A semantic engine that exposes sktime's native registry and semantics to Large Language Models, enabling them to:
- 🔍 Discover valid estimators
- 🧠 Reason about estimator capabilities
- 🔗 Compose compatible estimators
- ⚡ Execute real sktime workflows on real data
🎯 Design Philosophy
This MCP is not just documentation or static code analysis. It is a semantic engine for programmatic model usage.
Key Principles
- sktime as Source of Truth - no AST parsing, no repo indexing, no heuristics. All structure comes from `all_estimators`, estimator tags, and sktime's API contracts.
- Registry-First - instead of File → Class → Infer Relationships, we do Registry → Semantics → Safe Execution.
- Minimal MCP Surface - exposes only what an LLM needs: discovery, description, instantiation, and execution.
🛠️ Installation
```bash
# Install from source
pip install -e .

# With all optional dependencies
pip install -e ".[all]"

# Development installation
pip install -e ".[dev]"
```
🚀 Quick Start
Running the MCP Server
```bash
# Start the MCP server
sktime-mcp

# Or run directly
python -m sktime_mcp.server
```
Connecting from an LLM Client
The server uses stdio transport by default, compatible with Claude Desktop and other MCP clients.
Add to your Claude Desktop config (~/.config/claude/claude_desktop_config.json):
```json
{
  "mcpServers": {
    "sktime": {
      "command": "sktime-mcp"
    }
  }
}
```
📚 Available Tools
Discovery & Search
1. list_estimators
Discover estimators by task type and capability tags.
Arguments:
- `task` (optional): task type filter (`"forecasting"`, `"classification"`, `"regression"`, `"transformation"`, `"clustering"`)
- `tags` (optional): filter by capability tags (e.g., `{"capability:pred_int": true}`)
- `limit` (optional): maximum results (default: 50)
Example:
```json
{
  "task": "forecasting",
  "tags": {
    "capability:pred_int": true
  },
  "limit": 10
}
```
Returns: List of matching estimators with name, task, and summary info.
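Under the hood, tag filtering reduces to matching key/value pairs against each estimator's tag dictionary. A minimal sketch of that semantics over a hypothetical mini-registry (illustrative only; the real server resolves this from sktime's `all_estimators` and estimator tags):

```python
# Hypothetical mini-registry standing in for sktime's estimator registry.
REGISTRY = [
    {"name": "ARIMA", "task": "forecasting",
     "tags": {"capability:pred_int": True}},
    {"name": "NaiveForecaster", "task": "forecasting",
     "tags": {"capability:pred_int": True}},
    {"name": "TimeSeriesForestClassifier", "task": "classification",
     "tags": {"capability:multivariate": True}},
]

def list_estimators(task=None, tags=None, limit=50):
    """Filter the registry by task and by exact tag key/value matches."""
    hits = []
    for est in REGISTRY:
        if task is not None and est["task"] != task:
            continue
        if tags and any(est["tags"].get(k) != v for k, v in tags.items()):
            continue
        hits.append(est["name"])
    return hits[:limit]

print(list_estimators(task="forecasting", tags={"capability:pred_int": True}))
# → ['ARIMA', 'NaiveForecaster']
```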
2. search_estimators
Search estimators by name or description using text query.
Arguments:
- `query` (required): search string (case-insensitive)
- `limit` (optional): maximum results (default: 20)
Example:
```json
{
  "query": "ARIMA",
  "limit": 5
}
```
Returns: List of estimators matching the search query.
3. describe_estimator
Get detailed information about a specific estimator's capabilities.
Arguments:
- `estimator` (required): name of the estimator (e.g., `"ARIMA"`, `"NaiveForecaster"`)
Example:
```json
{
  "estimator": "ARIMA"
}
```
Returns: Full estimator details including tags, hyperparameters, docstring, and module path.
4. get_available_tags
List all queryable capability tags across all estimators.
Arguments: None
Returns: List of all available tags (e.g., ["capability:pred_int", "handles-missing-data", ...])
Instantiation
5. instantiate_estimator
Create a single estimator instance and return a handle.
Arguments:
- `estimator` (required): name of the estimator to instantiate
- `params` (optional): hyperparameters for the estimator
Example:
```json
{
  "estimator": "ARIMA",
  "params": {
    "order": [1, 1, 1],
    "suppress_warnings": true
  }
}
```
Returns: {"success": true, "handle": "est_abc123", "estimator": "ARIMA", "params": {...}}
6. instantiate_pipeline
Create a complete pipeline from a list of components (transformers → forecaster).
Arguments:
- `components` (required): list of estimator names in pipeline order
- `params_list` (optional): list of parameter dicts, one per component
Example:
```json
{
  "components": ["ConditionalDeseasonalizer", "Detrender", "ARIMA"],
  "params_list": [{}, {}, {"order": [1, 1, 1]}]
}
```
Returns: {"success": true, "handle": "est_xyz789", "pipeline": "ConditionalDeseasonalizer → Detrender → ARIMA", ...}
Note: This solves the "steps problem" - you don't need to instantiate components separately!
📖 Documentation
Project documentation lives in docs/ and can be served locally with MkDocs:
```bash
pip install -e ".[dev]"
mkdocs serve
```
The MkDocs config is in mkdocs.yml.
Validation
7. validate_pipeline
Check if a proposed pipeline composition is valid before instantiation.
Arguments:
- `components` (required): list of estimator names in pipeline order
Example:
```json
{
  "components": ["Detrender", "ARIMA"]
}
```
Returns: {"valid": true/false, "errors": [...], "warnings": [...], "suggestions": [...]}
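Conceptually, the check is that every component before the last resolves to a transformer and the final one to a forecaster. A toy sketch with a hardcoded scitype map (hypothetical; the real validator resolves scitypes from sktime's registry and tags):

```python
# Hypothetical scitype lookup; in practice this comes from sktime's registry.
SCITYPE = {
    "ConditionalDeseasonalizer": "transformer",
    "Detrender": "transformer",
    "ARIMA": "forecaster",
    "NaiveForecaster": "forecaster",
}

def validate_pipeline(components):
    """Return a validity report for a proposed transformer → forecaster chain."""
    if not components:
        return {"valid": False, "errors": ["empty pipeline"]}
    errors = []
    for name in components[:-1]:
        if SCITYPE.get(name) != "transformer":
            errors.append(f"{name} is not a transformer")
    if SCITYPE.get(components[-1]) != "forecaster":
        errors.append(f"{components[-1]} is not a forecaster")
    return {"valid": not errors, "errors": errors}

print(validate_pipeline(["Detrender", "ARIMA"]))
# → {'valid': True, 'errors': []}
```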
Execution
8. fit_predict
Execute a complete workflow: load dataset, fit estimator, and generate predictions.
Arguments:
- `estimator_handle` (required): handle from `instantiate_estimator` or `instantiate_pipeline`
- `dataset` (required): dataset name (e.g., `"airline"`, `"sunspots"`, `"lynx"`)
- `horizon` (optional): forecast horizon (default: 12)
Example:
```json
{
  "estimator_handle": "est_abc123",
  "dataset": "airline",
  "horizon": 12
}
```
Returns: {"success": true, "predictions": {1: 450.2, 2: 455.1, ...}, "horizon": 12}
Datasets
9. list_datasets
List all available demo datasets for testing and experimentation.
Arguments: None
Returns: {"success": true, "datasets": ["airline", "sunspots", "lynx", "shampoo", ...]}
Handle Management
10. list_handles
List all active estimator handles and their status.
Arguments: None
Returns: List of active handles with metadata (estimator name, fitted status, creation time)
11. release_handle
Release an estimator handle and free memory.
Arguments:
- `handle` (required): handle ID to release
Example:
```json
{
  "handle": "est_abc123"
}
```
Returns: {"success": true, "message": "Handle released"}
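A handle is simply an opaque ID mapped to a live estimator object on the server side. A minimal sketch of such a store (names are illustrative, not the server's actual internals):

```python
import uuid

class HandleStore:
    """Maps opaque handle IDs to live estimator objects."""

    def __init__(self):
        self._objects = {}

    def register(self, obj):
        """Store an object and return a fresh opaque handle for it."""
        handle = f"est_{uuid.uuid4().hex[:6]}"
        self._objects[handle] = obj
        return handle

    def get(self, handle):
        return self._objects[handle]

    def release(self, handle):
        """Drop the object so it can be garbage-collected."""
        return self._objects.pop(handle, None) is not None

store = HandleStore()
h = store.register(object())
print(store.release(h))  # True
print(store.release(h))  # False: already released
```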
🔄 Example LLM Flows
Flow 1: Simple Forecasting
User Prompt: "Forecast monthly airline passengers using a probabilistic model."
LLM Steps:
1. Discover models: `list_estimators(task="forecasting", tags={"capability:pred_int": true})`
2. Inspect choice: `describe_estimator(estimator="ARIMA")`
3. Instantiate: `instantiate_estimator(estimator="ARIMA", params={"order": [1,1,1]})` → returns `{"handle": "est_abc123"}`
4. Execute: `fit_predict(estimator_handle="est_abc123", dataset="airline", horizon=12)` → returns `{"predictions": {1: 450.2, 2: 455.1, ...}}`
Flow 2: Pipeline Forecasting ⭐
User Prompt: "Forecast with deseasonalization and detrending preprocessing."
LLM Steps:
1. Validate composition: `validate_pipeline(components=["ConditionalDeseasonalizer", "Detrender", "ARIMA"])` → returns `{"valid": true}`
2. Instantiate pipeline (single call!): `instantiate_pipeline(components=["ConditionalDeseasonalizer", "Detrender", "ARIMA"], params_list=[{}, {}, {"order": [1,1,1]}])` → returns `{"handle": "est_xyz789", "pipeline": "ConditionalDeseasonalizer → Detrender → ARIMA"}`
3. Execute: `fit_predict(estimator_handle="est_xyz789", dataset="airline", horizon=12)` → returns `{"predictions": {...}}`
📁 Project Structure
```text
sktime_mcp/
├── src/sktime_mcp/
│   ├── server.py        # MCP server entry point
│   ├── registry/        # Registry interface & tag resolver
│   ├── composition/     # Pipeline composition validator
│   ├── runtime/         # Execution engine & handle management
│   └── tools/           # MCP tool implementations
├── examples/            # Usage examples
└── tests/               # Test suite
```
🧪 Running Tests
```bash
pytest tests/
```