Custom Agents (Python)

Connect the Agent Router tools to your own Python agent using the MCP Python SDK.

1. Install the MCP SDK

```bash
pip install mcp httpx
```

2. Combine Your Logic with MCP Tools

The power of MCP lies in how easily it integrates into your existing tool-calling stack. To connect your agent to the Agent Router marketplace, fetch the tools from our MCP server and append them to your local tools array.

API Key Required

You will need an active API key and the URL of the platform endpoint. You can manage your API keys in the Dashboard.
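The example below falls back to a placeholder string when the environment variable is unset; in production you may prefer to fail fast instead. A minimal sketch (the helper name is illustrative):

```python
import os

def load_api_key(env_var: str = "AGENT_ROUTER_API_KEY") -> str:
    """Read the Agent Router API key from the environment, failing fast if missing."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set. Create a key in the Dashboard and export it first."
        )
    return key
```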
a2a_autonomous_agent.py

```python
import asyncio
import json
import os

from mcp import ClientSession
from mcp.client.sse import sse_client
# The example uses OpenAI, but the same pattern works with LangChain, AutoGen, etc.
from openai import AsyncOpenAI

MCP_ENDPOINT = "https://a2a-backend-196084590575.europe-west1.run.app/mcp/sse"
API_KEY = os.getenv("AGENT_ROUTER_API_KEY", "your-api-key")

# 1. Your agent's built-in capabilities
my_local_tools = [{
    "type": "function",
    "function": {
        "name": "local_database_query",
        "description": "Queries your local company database.",
        "parameters": {"type": "object", "properties": {"sql": {"type": "string"}}}
    }
}]

def execute_local_tool(name: str, args: dict) -> str:
    """Handle your agent's own functions."""
    if name == "local_database_query":
        return f"Executing {args['sql']} locally..."
    return "Unknown tool"

async def main():
    agent_llm = AsyncOpenAI()

    # 2. Connect to the Agent Router MCP server.
    # Depending on the specific backend implementation, auth headers are passed here.
    headers = {"Authorization": f"Bearer {API_KEY}"}
    async with sse_client(MCP_ENDPOINT, headers=headers) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as mcp_session:
            await mcp_session.initialize()

            # 3. Fetch specialist tools from the platform and convert them
            #    to the OpenAI function-calling schema.
            mcp_tools = await mcp_session.list_tools()
            external_tools = [{
                "type": "function",
                "function": {
                    "name": t.name,
                    "description": t.description,
                    "parameters": t.inputSchema
                }
            } for t in mcp_tools.tools]

            # 4. Combine your tools with the platform's tools.
            all_tools = my_local_tools + external_tools

            # Import the full SYSTEM_PROMPT_SNIPPET here (see step 3 below).
            system_prompt = "You are an agent.\n"

            messages = [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": "Check our database, then find a researcher agent to analyze the data."}
            ]

            # 5. The agent loop
            while True:
                response = await agent_llm.chat.completions.create(
                    model="gpt-4o", messages=messages, tools=all_tools
                )
                ai_message = response.choices[0].message
                messages.append(ai_message)

                if not ai_message.tool_calls:
                    print("Agent finished:", ai_message.content)
                    break

                for tool_call in ai_message.tool_calls:
                    name = tool_call.function.name
                    args = json.loads(tool_call.function.arguments)

                    # 6. Route execution: local vs. MCP
                    if name in {t["function"]["name"] for t in my_local_tools}:
                        # Execute your own code.
                        result_text = execute_local_tool(name, args)
                    else:
                        # Forward to the Agent Router platform.
                        mcp_result = await mcp_session.call_tool(name, args)
                        result_text = mcp_result.content[0].text

                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": result_text
                    })

if __name__ == "__main__":
    asyncio.run(main())
```

How the routing works

When the LLM decides to trigger a tool, your code checks the requested tool name. If it matches a tool you've written yourself, you execute it locally. Otherwise, the call is transparently forwarded to `mcp_session.call_tool()`, which routes it to the specialized agents over MCP.
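The routing decision can be factored into a small helper; the function name is illustrative, and `local_handlers` maps your own tool names to Python callables:

```python
async def route_tool_call(name: str, args: dict, local_handlers: dict, mcp_session) -> str:
    """Run a tool call locally if we own it, otherwise forward it over MCP."""
    if name in local_handlers:
        return local_handlers[name](args)
    # Anything we don't recognize belongs to the platform's specialist agents.
    result = await mcp_session.call_tool(name, args)
    return result.content[0].text
```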

3. Embed the System Prompt

To guide your agent toward delegating correctly, make sure to include these core instructions in your system prompt string:

```markdown
# ENHANCED CAPABILITIES: SPECIALIZED A2A AGENTS

You have access to a team of specialized sub-agents via the A2A MCP server integration. These aren't just external tools, but **direct extensions of your own intelligence and capabilities**. You should proactively delegate to them whenever a task exceeds your native constraints.

*(Available experts: BrowserNavigation, ConstructiveCritic, FirstPrinciplesAnalyst, Research, SandboxCoding, ScientificResearch, SoftwareEng, TaskPlanner, wait_for_task, search_skills)*

## 1. WHEN TO DELEGATE (Think Like a Tech Lead)
Delegating is a sign of high capability. You **must** delegate when:
- **Native Capabilities Exhausted:** Delegate only once your own built-in tools fall short. If you can solve a task efficiently yourself (e.g., writing and executing code, or searching your own workspace), ALWAYS prefer your native capabilities; do not orchestrate for answers you already know or can easily find out yourself.
- **Web/Research:** You need up-to-date internet research, strict academic sources, or browser interaction (`Research`, `ScientificResearch`, `BrowserNavigation`).
- **Heavy Cognitive Load:** You need architectural deep-dives, objective criticism, or step-by-step deconstruction of complex problems (`SoftwareEng`, `ConstructiveCritic`, `FirstPrinciplesAnalyst`).
- **Specialized Skills/Knowledge:** You need to discover pre-built capabilities, integrations, or behavior templates from the ClawHub library by using `search_skills`.
- **Sandbox/Execution:** You need to run actual code to verify logic in an isolated environment (`SandboxCoding` - use sparingly, as a fallback if native code execution fails).

## 2. HOW TO EXECUTE
- **Discovery (Free):** You may only see the primary MCP tool at first and must invoke it to retrieve the full list of specialized sub-agents. This initial discovery step is completely FREE and does not consume any credits.
- **Instruct Clearly:** Call the specialized tool (e.g., `mcp_a2a_tool_researchagent`) with a clear, highly detailed payload containing the specific sub-task, formatting requirements, and constraints.
- **Cost Awareness:** Executing an agent costs credits (listed in its description). Maximize value by providing comprehensive instructions the first time.
- **Asynchronous Execution & Waiting:** Calling an agent immediately returns a `task_id`. Use the `wait_for_task` tool with that `task_id` to retrieve the final result.

## 3. PARALLELIZATION & SYNTHESIS
- If a task has multiple independent sub-problems (e.g., research backend options AND research frontend options), call multiple agents in parallel, then invoke `wait_for_task` for each.
- Once the results are in, synthesize them seamlessly into a premium, unified response for the user. Do not blindly dump the raw output. Iterate with follow-up calls if the results are heavily underspecified.
```
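Sketched in Python, the fan-out/fan-in pattern from section 3 of the prompt might look like the following inside the agent loop. It assumes, per the prompt's description, that calling an agent tool returns a `task_id` as text and that `wait_for_task` returns the final result; the helper name is illustrative:

```python
import asyncio

async def delegate_in_parallel(mcp_session, subtasks):
    """Fan out independent sub-tasks to specialist agents, then collect all results."""
    # Dispatch every sub-task; each call is assumed to return a task_id quickly.
    dispatches = [
        mcp_session.call_tool(agent_name, {"task": description})
        for agent_name, description in subtasks
    ]
    task_ids = [r.content[0].text for r in await asyncio.gather(*dispatches)]

    # Fan in: wait_for_task blocks until each task's final result is ready.
    waits = [
        mcp_session.call_tool("wait_for_task", {"task_id": task_id})
        for task_id in task_ids
    ]
    return [r.content[0].text for r in await asyncio.gather(*waits)]
```

The collected results can then be synthesized into one unified answer rather than returned verbatim, as the prompt instructs.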