Intelli

Security Audit
Passed
Health Passed
  • License — Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 55 GitHub stars
Code Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions Passed
  • Permissions — No dangerous permissions requested
Purpose
This framework provides tools for building chatbots and multi-step AI agent workflows. It acts as a unified access layer, allowing developers to easily integrate and switch between various AI models like OpenAI, Anthropic, Gemini, and self-hosted vLLM instances.

Security Assessment
The overall risk is rated as Low. The automated code scan reviewed 12 files and found no dangerous patterns, hardcoded secrets, or requests for dangerous system permissions. Functionally, the code interacts heavily with external networks by making API calls to third-party AI providers (such as OpenAI and Google) based on user-supplied API keys, which is its intended purpose. It does not appear to execute arbitrary shell commands or access local sensitive data outside of these API configurations.

Quality Assessment
The project is actively maintained, with its most recent code push occurring today. It utilizes the permissive Apache-2.0 license, making it highly accessible for both personal and commercial software development. Community trust is currently moderate but growing, indicated by 55 GitHub stars, active PyPI downloads, and the presence of an active Discord server for developer support.

Verdict
Safe to use.
SUMMARY

Build multi-model chatbots and agents from intent.

README.md

Intelli

A framework for creating chatbots and AI agent workflows. It enables seamless integration with multiple AI models, including OpenAI, LLaMA, DeepSeek, Stable Diffusion, and Mistral, through a unified access layer. Intelli also supports the Model Context Protocol (MCP) for standardized interaction with AI models.

Install

# Basic installation
pip install intelli

# With MCP support
pip install "intelli[mcp]"

For detailed usage instructions, refer to the documentation.

Code Examples

Create Chatbot

Switch between multiple chatbot providers without changing your code.

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

def call_chatbot(provider, model=None, api_key=None, options=None):
    # prepare common input 
    input = ChatModelInput("You are a helpful assistant.", model)
    input.add_user_message("What is the capital of France?")

    # creating chatbot instance
    chatbot = Chatbot(api_key, provider, options=options)
    response = chatbot.chat(input)

    return response

# call chatGPT (GPT-5 is default)
call_chatbot(ChatProvider.OPENAI) 

# call GPT-4 explicitly
call_chatbot(ChatProvider.OPENAI, "gpt-4o")

# call claude3
call_chatbot(ChatProvider.ANTHROPIC, "claude-3-7-sonnet-20250219")

# call google gemini
call_chatbot(ChatProvider.GEMINI)

# Call NVIDIA Deepseek
call_chatbot(ChatProvider.NVIDIA, "deepseek-ai/deepseek-r1")

# Call vLLM (self-hosted)
call_chatbot(ChatProvider.VLLM, "meta-llama/Llama-3.1-8B-Instruct", options={"baseUrl": "http://localhost:8000"})
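
The snippets above pass API keys directly as arguments; in practice they are usually read from the environment. A minimal sketch using only the standard library (the environment variable naming convention here is illustrative, not something Intelli requires):

```python
import os

def get_api_key(provider: str) -> str:
    """Look up an API key such as OPENAI_API_KEY from the environment."""
    var_name = f"{provider.upper()}_API_KEY"
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before calling the {provider} provider")
    return key

# e.g. call_chatbot(ChatProvider.OPENAI, "gpt-4o", api_key=get_api_key("openai"))
```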

Create AI Flows

You can create a flow of tasks executed by different AI models. Here's an example of creating a blog post flow:

from intelli.flow import Agent, Task, SequenceFlow, TextTaskInput, TextProcessor


# define agents
blog_agent = Agent(agent_type='text', provider='openai', mission='write blog posts', model_params={'key': YOUR_OPENAI_API_KEY, 'model': 'gpt-4'})
copy_agent = Agent(agent_type='text', provider='gemini', mission='generate description', model_params={'key': YOUR_GEMINI_API_KEY, 'model': 'gemini'})
artist_agent = Agent(agent_type='image', provider='stability', mission='generate image', model_params={'key': YOUR_STABILITY_API_KEY})

# define tasks
task1 = Task(TextTaskInput('blog post about electric cars'), blog_agent, log=True)
task2 = Task(TextTaskInput('Generate short image description for image model'), copy_agent, pre_process=TextProcessor.text_head, log=True)
task3 = Task(TextTaskInput('Generate cartoon style image'), artist_agent, log=True)

# start sequence flow
flow = SequenceFlow([task1, task2, task3], log=True)
final_result = flow.start()
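
Note the pre_process=TextProcessor.text_head hook on task2: it trims the previous task's output before it is handed to the next agent. A hypothetical stand-in with the same shape, for intuition only (the real TextProcessor.text_head may differ in signature and default size):

```python
def text_head(text: str, size: int = 800) -> str:
    """Keep only the leading characters of a long model output."""
    return text[:size]

# a long blog post gets trimmed before the image-description agent sees it
short = text_head("a very long blog post " * 100, size=50)
```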

Graph-Based Agents

To build async flows with multiple paths, refer to the flow tutorial.


Or build the entire flow using natural language with Vibe Agents.
Refer to the documentation for more details.

Generate Images

Use the image controller to generate art from multiple models with minimal code changes:

from intelli.controller.remote_image_model import RemoteImageModel
from intelli.model.input.image_input import ImageModelInput

# model details - change only two words to switch
provider = "openai"
model_name = "dall-e-3"

# prepare the input details
prompt = "cartoonishly-styled solitary snake logo, looping elegantly to form both the body of the python and an abstract play on data nodes."
image_input = ImageModelInput(prompt=prompt, width=1024, height=1024, model=model_name)

# call the model openai/stability
wrapper = RemoteImageModel(your_api_key, provider)
results = wrapper.generate_images(image_input)
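
The shape of generate_images results is provider-dependent; assuming a provider hands back base64-encoded image payloads (an assumption to verify against the docs for your provider), they can be written to disk like this:

```python
import base64
from pathlib import Path

def save_base64_image(b64_data: str, path: str) -> int:
    """Decode a base64 image payload, write it to path, and return bytes written."""
    raw = base64.b64decode(b64_data)
    Path(path).write_bytes(raw)
    return len(raw)

# e.g. for i, b64 in enumerate(results): save_base64_image(b64, f"image_{i}.png")
```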

GGUF Optimized Models

Llama CPP provides an efficient way to run language models locally with support for models in the new GGUF format, check the docs.

MCP Calculator Demo

Check out the MCP Calculator Demo for a sample showing how to create an MCP server with math operations and a client that uses a flow to interpret natural language queries.

MCP DataFrame Demo

Check out the MCP DataFrame Demo for an example of how to serve dataframes as MCP servers and utilize them within Intelli flows, enabling integration with AI models.

Connect Your Docs With Chatbot

IntelliPy allows you to chat with your docs using multiple LLMs. To connect your data, visit the IntelliNode App, start a project using the Document option, upload your documents or images, and copy the generated One Key. This key will be used to connect the chatbot to your uploaded data.

# creating chatbot with the intellinode one key
bot = Chatbot(YOUR_OPENAI_API_KEY, "openai", {"one_key": YOUR_ONE_KEY})

input = ChatModelInput("You are a helpful assistant.")  # uses GPT-5 by default
input.add_user_message("What is the procedure for requesting a refund according to the user manual?")
# optional: return the name of the matched source file
input.attach_reference = True

response = bot.chat(input)

Repository Setup

  1. Install the requirements.
pip install -r requirements.txt
  2. Rename .example.env to .env and fill in the keys.

  3. Run the test cases; examples below.

# images
python3 -m unittest intelli.test.integration.test_remote_image_model

# chatbot
python3 -m unittest intelli.test.integration.test_chatbot

# mistral
python3 -m unittest intelli.test.integration.test_mistralai_wrapper

# ai flows
python3 -m unittest intelli.test.integration.test_flow_sequence
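
Step 2's .env file holds KEY=VALUE pairs. Whether the repo pulls in a loader such as python-dotenv is not shown here; if not, a minimal stdlib loader sketch:

```python
import os

def load_env(path: str = ".env") -> None:
    """Read KEY=VALUE lines from a .env file into os.environ.

    Blank lines and #-comments are skipped; existing variables are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```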

Pillars

  • The wrapper layer provides low-level access to the latest AI models.
  • The controller layer offers a unified input to any AI model by handling the differences.
  • The function layer provides abstract functionality that extends based on the app's use cases.
  • Flows: create a flow of AI agents working toward user tasks.
  • Vibe Agents: generate a graph of agents from an intent.

If you reference Vibe Agents, please cite this repository (see CITATION.cff).
