auto_ai_router
Failed
Health Warning
- License — Apache-2.0
- Description — Repository has a description
- Active repo — Last push 0 days ago
- Low visibility — Only 9 GitHub stars
Code Failed
- Hardcoded secret — Potential hardcoded credential in examples/example_image_chat_openai.py
- Hardcoded secret — Potential hardcoded credential in examples/example_image_openai.py
- Hardcoded secret — Potential hardcoded credential in examples/example_responses.py
- Hardcoded secret — Potential hardcoded credential in examples/example_schema.py
- Hardcoded secret — Potential hardcoded credential in examples/example_simple_chat.py
- Hardcoded secret — Potential hardcoded credential in examples/example_video_chat.py
Permissions Passed
- Permissions — No dangerous permissions requested
Purpose
This tool is a high-performance proxy router designed for Large Language Model (LLM) APIs. It provides automatic load balancing across multiple providers (like OpenAI, Anthropic, and Gemini), rate limiting, and fail2ban protection to manage traffic and API credentials.
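The round-robin balancing described above can be sketched in a few lines; this is an illustrative model of the idea, not the router's actual Go implementation, and the credential names are placeholders:

```python
from itertools import cycle

# Hypothetical credential pool — in the real router these come from config
# entries resolved via environment variables, never from source code.
credentials = ["cred-openai-1", "cred-openai-2", "cred-openai-3"]

# Round-robin: each incoming request takes the next credential in the cycle,
# spreading rate-limit pressure evenly across the pool.
pool = cycle(credentials)

def next_credential() -> str:
    return next(pool)

# Four requests wrap around the three-credential pool.
picks = [next_credential() for _ in range(4)]
print(picks)  # ['cred-openai-1', 'cred-openai-2', 'cred-openai-3', 'cred-openai-1']
```

In the real tool this selection is combined with per-credential rate limits and fail2ban, so a banned or throttled credential would be skipped rather than blindly cycled to.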
Security Assessment
As an API proxy, the tool inherently processes highly sensitive data, including all prompt payloads and routing credentials. By design, it makes continuous external network requests to various LLM providers. The automated scanner flagged several instances of hardcoded credentials across multiple example files (e.g., `example_simple_chat.py`, `example_image_openai.py`). While these are isolated to sample code rather than the core Go application, it indicates poor secret hygiene in the repository. Furthermore, the tool is designed to read API keys via environment variables, meaning its operational environment must be strictly secured. Due to the sensitive nature of the data it handles and the flagged example files, the overall security risk is rated as Medium.
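The safe pattern the scanner's findings point toward is loading credentials from the environment rather than embedding them in source. A minimal sketch — the variable name `OPENAI_API_KEY` and the fail-fast behavior are conventions, not requirements of this tool:

```python
import os

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Read a credential from the environment; never hardcode it in source."""
    key = os.environ.get(var_name)
    if not key:
        # Failing fast beats silently running with a missing or empty key.
        raise RuntimeError(f"{var_name} is not set; export it before running")
    return key
```

Applying this pattern to the flagged example files would resolve the scanner's hardcoded-credential findings without changing how the examples behave for a properly configured environment.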
Quality Assessment
The project is actively maintained, with its last code push occurring today. It is properly licensed under the permissive Apache-2.0 license and does not request any dangerous system permissions. However, community trust and visibility are currently very low: the repository has only 9 GitHub stars. Developers should expect limited community support and little external auditing of this project.
Verdict
Use with caution — the core tool appears functional and actively maintained, but it should be strictly sandboxed due to its access to sensitive API keys, and developers must ensure they do not accidentally adopt the insecure hardcoded secrets found in the repository's example files.
High-performance proxy router for LLM APIs with automatic load balancing, rate limiting, and fail2ban protection
README.md
Auto AI Router
High-performance proxy router for LLM APIs with automatic load balancing, rate limiting, and fail2ban protection. Routes requests to OpenAI, Vertex AI, Gemini AI Studio, Anthropic, and other Auto AI Router instances.
Key Features
- Multi-provider support — OpenAI, Vertex AI, Gemini, Anthropic, Proxy chains
- Round-robin load balancing — across multiple credentials per model
- Rate limiting — per-credential and per-model RPM/TPM controls
- Fail2ban — automatic provider banning on repeated errors
- Prometheus metrics — request counts, latency, credential status
- LiteLLM DB integration — spend logging and API key authentication
- Streaming — full SSE support for all providers
- Environment variables — secure credential management via `os.environ/VAR_NAME`
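The `os.environ/VAR_NAME` reference style above suggests a config along these lines; the key names (`providers`, `api_key`) are assumptions inferred from the feature list, not the documented schema — consult the full documentation for the actual format:

```yaml
# Hypothetical config sketch — key names are illustrative, not the real schema.
providers:
  - name: openai
    api_key: os.environ/OPENAI_API_KEY      # resolved from the environment at startup
  - name: anthropic
    api_key: os.environ/ANTHROPIC_API_KEY   # no secret ever appears in the file itself
```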
Quick Start
# Build
git clone https://github.com/MiXaiLL76/auto_ai_router.git
cd auto_ai_router
go build -o auto_ai_router ./cmd/server/
# Run
./auto_ai_router -config config.yaml
Or with Docker:
docker pull ghcr.io/mixaill76/auto_ai_router:latest
docker run -p 8080:8080 -v $(pwd)/config.yaml:/app/config.yaml ghcr.io/mixaill76/auto_ai_router:latest
Documentation
Full documentation is available at mixaill76.github.io/auto_ai_router.
License
Apache License 2.0 — see LICENSE file.