AgentX-Python

Security Audit
Passed
Health Passed
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 46 GitHub stars
Code Passed
  • Code scan — Scanned 10 files during light audit, no dangerous patterns found
Permissions Passed
  • Permissions — No dangerous permissions requested
Purpose
This Python SDK enables developers to build and orchestrate multi-agent AI workforces. It acts as a client wrapper to interact with the AgentX platform via their API.

Security Assessment
The tool functions by making external network requests to the AgentX API servers to process chats and manage AI agents. It does not appear to execute local shell commands or access unauthorized local files. No hardcoded secrets or dangerous permission requests were found during the code scan. Users must provide an API key, which can be safely loaded from a local environment variable rather than being typed directly into the script. Overall risk is rated as Low.

Quality Assessment
The project is actively maintained, with its most recent code push happening today. It is distributed under the standard MIT license, making it highly accessible for most projects. While the repository currently has a modest community footprint with 46 stars, the automated code scan found no dangerous patterns across its files.

Verdict
Safe to use — It is an active, permissively licensed SDK with low security risks, though users should be aware that all interactions rely on an external cloud service.
SUMMARY

AgentX Python SDK. Build multi-agent AI workforces.

README.md

A fast way to build AI agents and create agent workforces

The official Python SDK for AgentX

Why build AI agents with AgentX?

  • Simplicity: Agent - Conversation - Message structure.
  • Includes chain-of-thought.
  • Choose from most open- and closed-source LLM vendors.
  • Built-in voice (ASR, TTS), image generation, document, CSV/Excel, and OCR tools.
  • Supports all running MCP (Model Context Protocol) servers.
  • Supports RAG with built-in re-ranking.
  • Multi-agent workforce orchestration.
  • Multiple agents working together with a designated manager agent.
  • Cross-LLM-vendor, multi-agent orchestration.
  • A2A, the agent-to-agent protocol (coming soon).

Installation

pip install --upgrade agentx-python

Quick Start

Get started with AgentX in just a few lines of code:

from agentx import AgentX

# Initialize the client
client = AgentX(api_key="your-api-key-here")

# Get your agents
agents = client.list_agents()
print(f"You have {len(agents)} agents")

# Start chatting with your first agent
if agents:
    agent = agents[0]
    conversation = agent.new_conversation()
    response = conversation.chat("Hello! What can you help me with?")
    print(response)

Usage

Provide an api_key inline or set AGENTX_API_KEY as an environment variable.
You can get an API key from https://app.agentx.so
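The environment-variable approach can be sketched as follows. The `load_agentx_key` helper is hypothetical, not part of the SDK; it simply reads `AGENTX_API_KEY` and fails loudly if the variable is missing:

```python
import os

# Hypothetical helper (not part of the SDK): read the API key from the
# AGENTX_API_KEY environment variable instead of hardcoding it in source.
def load_agentx_key() -> str:
    api_key = os.environ.get("AGENTX_API_KEY")
    if not api_key:
        raise RuntimeError("Set AGENTX_API_KEY before running this script")
    return api_key

# The key can then be passed to the client as usual:
# client = AgentX(api_key=load_agentx_key())
```

Keeping the key out of source files means it never ends up in version control or shared snippets.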

Agent

from agentx import AgentX

client = AgentX(api_key="<your api key here>")

# Get the list of agents you have
print(client.list_agents())

Conversation

Each Conversation has agents and users tied to it.

# get agent
my_agent = client.get_agent(id="<agent id here>")

# Get the list of conversation from this agent
existing_conversations = my_agent.list_conversations()
print(existing_conversations)

# Get the list of history messages from a conversation
last_conversation = existing_conversations[-1]
msgs = last_conversation.list_messages()
print(msgs)

Chat

A chat happens within a conversation. You can also stream the response; streaming is off by default.

a_conversation = my_agent.get_conversation(id="<conversation id here>")

response = a_conversation.chat_stream("Hello, what is your name?")
for chunk in response:
    print(chunk)

The output looks like:

text=None cot='The user is greeting and asking for my ' botId='xxx'
text=None cot='name, which are casual, straightforward questions.' botId='xxx'
text=None cot=' I can answer these directly' botId='xxx'
text='Hello' cot=None botId='xxx'
text='!' cot=None botId='xxx'
text=' I' cot=None botId='xxx'
text=' am' cot=None botId='xxx'
text=' AgentX' cot=None botId='xxx'
text=None cot=None botId='xxx'

*cot stands for chain-of-thought
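Since the stream interleaves text and cot pieces, a small helper can reassemble them into a final answer. This is a sketch: the `Chunk` dataclass stands in for the SDK's streamed objects shown above, and `assemble` is a hypothetical helper, not part of the SDK:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Chunk:
    # Stand-in for the SDK's streamed chunk objects shown above.
    text: Optional[str] = None
    cot: Optional[str] = None
    botId: str = "xxx"

def assemble(chunks: Iterable[Chunk]) -> tuple[str, str]:
    """Join streamed pieces into (answer, chain_of_thought) strings."""
    answer_parts, cot_parts = [], []
    for chunk in chunks:
        if chunk.text:
            answer_parts.append(chunk.text)
        if chunk.cot:
            cot_parts.append(chunk.cot)
    return "".join(answer_parts), "".join(cot_parts)

# Example with chunks shaped like the sample output above:
stream = [
    Chunk(cot="The user is greeting"),
    Chunk(text="Hello"),
    Chunk(text="!"),
    Chunk(text=" I am AgentX"),
]
answer, cot = assemble(stream)
# answer == "Hello! I am AgentX"
```

With the real SDK you would pass the generator returned by `chat_stream` to the same helper instead of a hand-built list.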

Workforce

A Workforce (team) consists of multiple agents working together with a designated manager agent.

from agentx import AgentX

client = AgentX(api_key="<your api key here>")

# Get the list of workforces/teams you have
workforces = client.list_workforces()
print(workforces)

# Get a specific workforce
workforce = workforces[0]  # or any specific workforce
print(f"Workforce: {workforce.name}")
print(f"Manager: {workforce.manager.name}")
print(f"Agents: {[agent.name for agent in workforce.agents]}")

Workforce Conversations

# Create a new conversation with the workforce
conversation = workforce.new_conversation()

# List all existing conversations for the workforce
conversations = workforce.list_conversations()
print(conversations)

Chat with Workforce

Chat with the entire workforce team and get streaming responses from all agents.

# Stream chat with the workforce
response = workforce.chat_stream(conversation.id, "How can you help me with this project?")
for chunk in response:
    if chunk.text:
        print(chunk.text, end="")
    if chunk.cot:
        print(f" [COT: {chunk.cot}]")

The workforce chat allows you to leverage multiple specialized agents working together to provide comprehensive responses to your queries.
