Train-in-Silence
Health: Passed
- License — MIT
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Community trust — 39 GitHub stars
Code: Warning
- network request — Outbound network request in scripts/debug_aws_provider.py
- network request — Outbound network request in scripts/refresh_catalogs.py
- network request — Outbound network request in tests/test_gpufinder.py
Permissions: Passed
- Permissions — No dangerous permissions requested
The first Task-Aware MCP server and automated VRAM calculator for LLM fine-tuning. Instantly snipe the cheapest, fastest GPUs across 10+ cloud providers.
Train in Silence
The first Task-Aware MCP server for LLM fine-tuning. Stop comparing GPU prices. Start training.

You want to fine-tune an LLM. You open Vast.ai, RunPod, AWS, and the rest -- a dozen tabs, a dozen pricing models, a dozen different ways to describe a GPU. Which option can actually run your job, and which gets it done cheapest and fastest? An hour later you're still in a spreadsheet and haven't written a single line of training code.
Train in Silence is the first Task-Aware MCP server for LLM fine-tuning. It doesn't just list prices; it understands your workload. Describe your training job once, and it estimates the required VRAM and FLOPs, then returns the cheapest, fastest, and most balanced hardware options across a dozen cloud providers -- in seconds.
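To give a feel for the sizing problem (a generic back-of-envelope formula, not TIS's internal estimator): full fine-tuning with Adam in fp16 needs roughly 2 bytes per parameter for weights, 2 for gradients, and 8 for optimizer states, plus headroom for activations.

```python
def estimate_vram_gb(params_billion: float,
                     weight_bytes: int = 2,          # fp16 weights
                     grad_bytes: int = 2,            # fp16 gradients
                     optim_bytes: int = 8,           # Adam moments in fp32
                     activation_overhead: float = 0.2) -> float:
    """Back-of-envelope VRAM estimate for full fine-tuning.

    A generic rule of thumb, NOT TIS's actual estimation model:
    static memory = params * (weights + grads + optimizer states),
    plus a flat margin for activations.
    """
    static_bytes = params_billion * 1e9 * (weight_bytes + grad_bytes + optim_bytes)
    return static_bytes * (1 + activation_overhead) / 1e9

# A 7B model: ~7 * 12 = 84 GB before activations -- which is why a
# single 80 GB A100 is already borderline for full fine-tuning.
print(f"{estimate_vram_gb(7):.0f} GB")  # ~101 GB
```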
Quickstart
Option A: Ask Claude Code (recommended)
Install the library and register it as a tool in Claude Code:
```
pip install train-in-silence
claude mcp add tis --scope user -- tis-mcp
```
Then just ask in natural language:
> I want to run the fine-tune code in my current directory and finish it within 20 hours.
> Find me the best GPU options across Vast.ai, RunPod, and Lambda.
Claude Code calls TIS behind the scenes and returns a structured recommendation -- no YAML, no config files, no manual comparison.
Option B: CLI
```
pip install train-in-silence
tis recommend examples/request.yaml
```

Sample output:

```
$ tis recommend examples/request.yaml
Found 5 viable configurations
Lowest cost: $4.32 | Fastest runtime: 2.1 hours

#1 [cheapest]  RunPod   1x A6000 (48 GB)   $4.32 / 6.8 h
#2 [fastest]   Vast.ai  2x A100  (80 GB)   $9.10 / 2.1 h
#3 [balanced]  RunPod   1x A100  (80 GB)   $6.40 / 3.2 h
...
```
Note: Output above is illustrative. Actual results depend on live market data.
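The contents of examples/request.yaml aren't reproduced here. The sketch below generates a request file of a plausible shape; every field name in it is an illustrative assumption, not the actual TIS schema -- check examples/request.yaml in the repo for the real format.

```python
import yaml  # pip install pyyaml

# Hypothetical request shape -- all field names here are illustrative,
# NOT the actual TIS schema; see examples/request.yaml for the real format.
request = {
    "model": "meta-llama/Llama-3.1-8B",
    "method": "lora",                      # e.g. full / lora / qlora
    "dataset_tokens": 50_000_000,
    "max_hours": 20,
    "providers": ["vastai", "runpod", "lambda"],
}
with open("request.yaml", "w") as f:
    yaml.safe_dump(request, f, sort_keys=False)
```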
Use It Your Way
| Channel | Command | Docs |
|---|---|---|
| CLI | `tis recommend request.yaml` | CLI Guide |
| REST API | `uvicorn tis.api.server:app` (client sketch below) | API Reference |
| Claude Code | `claude mcp add tis --scope user -- tis-mcp` | MCP Guide |
| Claude Desktop | Add `tis-mcp` to `claude_desktop_config.json` | MCP Guide |
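For the REST channel, here is a minimal client sketch. The `/recommend` path and the payload shape are assumptions made for illustration; the actual routes and schema are in the API Reference.

```python
import requests

# Assumed endpoint and payload for illustration only -- consult the
# API Reference for the server's actual routes and request schema.
resp = requests.post(
    "http://localhost:8000/recommend",
    json={"model": "meta-llama/Llama-3.1-8B", "max_hours": 20},
    timeout=30,
)
resp.raise_for_status()
for option in resp.json().get("options", []):
    print(option)
```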
Market Providers
TIS aggregates live pricing across a dozen GPU clouds. API keys are optional: if none are provided, TIS falls back sequentially to universal live aggregators (GPUHunt/GPUFinder) and finally to bundled sample data.
| Provider Class | Included Platforms | Auth Required |
|---|---|---|
| Dedicated | Vast.ai, RunPod | Optional (Highly Recommended) |
| Aggregated | Vast.ai, RunPod, AWS, CoreWeave, Lambda Labs, Tensordock, Vultr, GCP, Azure, OCI, Nebius, CloudRift, Cudo Compute, Verda | None (Auto-fallback) |
Every recommendation clearly identifies its Source of Truth (e.g., live:official, live:gpuhunt, live:gpufinder, or sample) so you always know how fresh the data is. -> Provider details
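In code, the fallback order described above looks roughly like the sketch below. This is a minimal illustration under stated assumptions; the function wiring and labels mirror the documented source-of-truth tags but are not TIS's actual internals.

```python
from typing import Callable, Iterable

def fetch_offers(sources: Iterable[tuple[str, Callable[[], list]]]) -> tuple[str, list]:
    """Return offers from the first source that answers, plus its label.

    Hypothetical sketch of the sequential fallback described above;
    TIS's real provider-resolution code will differ.
    """
    for label, fetch in sources:
        try:
            offers = fetch()
            if offers:
                return label, offers
        except Exception:
            continue  # fall through to the next, staler source
    return "sample", []

# Illustrative wiring: official APIs first, then aggregators, then samples.
source_chain = [
    ("live:official", lambda: []),          # stub: no API keys configured
    ("live:gpuhunt", lambda: [{"gpu": "A100", "usd_hr": 1.89}]),
    ("sample", lambda: [{"gpu": "A6000", "usd_hr": 0.79}]),
]
label, offers = fetch_offers(source_chain)
print(label, offers)  # live:gpuhunt [{'gpu': 'A100', 'usd_hr': 1.89}]
```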
Architecture at a Glance
```
YAML request -> Estimator -> Market Aggregator -> Optimizer -> Pareto Frontier -> Ranked Output
                    |                |                |
                VRAM/FLOPs    10+ GPU clouds    cost vs. time
```
Each recommendation shows where the data came from (live or sample) and flags any estimated fields -- no silent guesswork. -> Architecture deep-dive
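The cost-vs-time trade-off reduces to a two-objective Pareto filter: an option survives only if no other option is at least as cheap and at least as fast, and strictly better on one axis. Below is a minimal sketch of that filter -- the generic algorithm, not necessarily TIS's exact optimizer.

```python
def pareto_frontier(options: list[dict]) -> list[dict]:
    """Keep options not dominated on (cost, hours).

    Generic two-objective Pareto filter, not TIS internals: an option
    is dropped if some other option is <= on both axes and < on one.
    """
    frontier = []
    for a in options:
        dominated = any(
            b["cost"] <= a["cost"] and b["hours"] <= a["hours"]
            and (b["cost"] < a["cost"] or b["hours"] < a["hours"])
            for b in options
        )
        if not dominated:
            frontier.append(a)
    return sorted(frontier, key=lambda o: o["cost"])

offers = [
    {"name": "1x A6000", "cost": 4.32, "hours": 6.8},
    {"name": "2x A100",  "cost": 9.10, "hours": 2.1},
    {"name": "1x A100",  "cost": 6.40, "hours": 3.2},
    {"name": "1x V100",  "cost": 7.50, "hours": 7.0},  # dominated: pricier AND slower
]
print([o["name"] for o in pareto_frontier(offers)])
# ['1x A6000', '1x A100', '2x A100']
```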
Known Limitations
- The estimation model is fixed, with no built-in calibration; future versions will calibrate it against measured runtimes.
- Schema changes in upstream provider APIs require synchronized updates to TIS's mapping layer.
🚧 Project Status & Contribution
This project is currently in the experimental development stage.
- Issues & Suggestions: If you encounter bugs or inaccurate estimates, or have suggestions for improvement, please feel free to submit a GitHub Issue.
- Contribute: If you'd like to improve the code or supplement hardware metadata, Pull Requests are highly welcome! We look forward to refining this LLM hardware planner with the community.