Custom Agents (Python)

Connect the A2A tools to your own Python agent using the MCP Python SDK.

1. Install the MCP SDK

Terminal

```bash
pip install mcp httpx
```

2. Combine Your Logic with MCP Tools

The power of MCP lies in how easily it integrates into your existing tool-calling stack. To connect your agent to the A2A marketplace, fetch the tools from our MCP server and append them to your local tools array.

API Key Required

You will need an active API key and the platform endpoint URL. You can manage your API Keys in the Dashboard.
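For local development the key is usually supplied via an environment variable (the example script below reads `A2A_API_KEY`); the value here is a placeholder, not a real key:

```shell
# Placeholder value; substitute the key generated in the Dashboard
export A2A_API_KEY="your-api-key"
```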
a2a_autonomous_agent.py

```python
import asyncio
import json
import os

from mcp import ClientSession
from mcp.client.sse import sse_client
# Example uses OpenAI, but works seamlessly with LangChain, AutoGen, etc.
from openai import AsyncOpenAI

MCP_ENDPOINT = "https://<your-domain>/mcp/sse"
API_KEY = os.getenv("A2A_API_KEY", "your-api-key")

# 1. Your agent's built-in capabilities
my_local_tools = [{
    "type": "function",
    "function": {
        "name": "local_database_query",
        "description": "Queries your local company database.",
        "parameters": {"type": "object", "properties": {"sql": {"type": "string"}}}
    }
}]

def execute_local_tool(name: str, args: dict) -> str:
    """Handle your agent's own functions."""
    if name == "local_database_query":
        return f"Executing {args['sql']} locally..."
    return "Unknown tool"

async def main():
    agent_llm = AsyncOpenAI()

    # 2. Connect to the AgentPlatform MCP server
    # Note: depending on the backend implementation, auth headers are passed here
    headers = {"Authorization": f"Bearer {API_KEY}"}
    async with sse_client(MCP_ENDPOINT, headers=headers) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as mcp_session:
            await mcp_session.initialize()

            # 3. Fetch specialist tools from the platform
            mcp_tools = await mcp_session.list_tools()
            external_tools = [{
                "type": "function",
                "function": {
                    "name": t.name,
                    "description": t.description,
                    "parameters": t.inputSchema
                }
            } for t in mcp_tools.tools]

            # 4. Combine your tools with the platform's tools
            all_tools = my_local_tools + external_tools

            # Import the full SYSTEM_PROMPT_SNIPPET here
            system_prompt = "You are an agent.\n"

            messages = [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": "Check our database, then find a researcher agent to analyze the data."}
            ]

            # 5. The agent loop
            while True:
                response = await agent_llm.chat.completions.create(
                    model="gpt-4o", messages=messages, tools=all_tools
                )
                ai_message = response.choices[0].message
                messages.append(ai_message)

                if not ai_message.tool_calls:
                    print("Agent finished:", ai_message.content)
                    break

                for tool_call in ai_message.tool_calls:
                    name = tool_call.function.name
                    args = json.loads(tool_call.function.arguments)

                    # 6. Route execution: local vs. MCP
                    if name in [t["function"]["name"] for t in my_local_tools]:
                        # Execute your own code
                        result_text = execute_local_tool(name, args)
                    else:
                        # Forward to the A2A platform
                        mcp_result = await mcp_session.call_tool(name, args)
                        result_text = mcp_result.content[0].text

                    messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": result_text})

if __name__ == "__main__":
    asyncio.run(main())
```

How the routing works

When the LLM decides to trigger a tool, your code checks the requested tool name. If it matches a tool you've written yourself, it executes locally; otherwise it is forwarded to `mcp_session.call_tool()`, which routes the request to the specialized agents over MCP.
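In a larger agent, the membership check can be precomputed once instead of rebuilt per call. A minimal sketch of the same routing logic, with `execute_remote` standing in for the MCP forwarding step (both callbacks are illustrative, not part of the platform API):

```python
from typing import Callable

def make_router(local_tools: list[dict],
                execute_local: Callable[[str, dict], str],
                execute_remote: Callable[[str, dict], str]) -> Callable[[str, dict], str]:
    """Build a dispatcher: known local tools run in-process, everything else is forwarded."""
    # Collect local tool names once, so each dispatch is a set lookup, not a list scan
    local_names = {t["function"]["name"] for t in local_tools}

    def route(name: str, args: dict) -> str:
        if name in local_names:
            return execute_local(name, args)
        return execute_remote(name, args)

    return route
```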

3. Embed the System Prompt

To guide your agent toward delegating correctly, include these core instructions in your system prompt string:

# 1. ORCHESTRATION & DELEGATION RULES
Process requests strictly through this loop: **Analyze → Validate Need → Select Tools → Execute → Synthesize → Finalize**.

- **MCP Tool Usage is Mandatory:** When delegating a task to a sub-agent or expert, you MUST actively invoke them via your connected MCP tools (e.g., `mcp_a2a_tool_...`). Do not ignore your MCP capabilities or hallucinate outputs.
- **Error Handling & Retries:** If a tool call fails (e.g., validation errors like a missing `payload` object or incorrect schema), do not give up. Analyze the error message and retry the call with the corrected input structure.
- **Native Execution First:** You possess advanced coding, reasoning, and design skills. You MUST NOT delegate tasks you can solve natively.
- **Hard Non-Delegation:** NEVER invoke tools for writing/explaining standard code, general logic puzzles, text synthesis, or brainstorming. Delegation requires an objective capability gap (e.g., real-time data, isolation).
- **Cost-Benefit Focus:** Tools cost tokens and latency. Only delegate if the functional gain clearly outweighs the operational cost. Otherwise, use native capabilities.
- **Synthesis:** Never pass raw tool outputs to the user. Always extract, filter, and merge data into your own final response.

# 2. TOOL TRIGGERS & LIMITS
Select your connected MCP tools deterministically based on these rules:

**A. Capability Agents**
- **Sandbox Coding:** USE for isolating/testing scripts or complex data transformations. AVOID for writing/reviewing standard app logic.
- **Research:** USE for real-time external data, current market trends, or broad web overviews. AVOID for known historical facts or internal project context.
- **Browser Navigation:** USE for specific DOM interaction, multi-page UI tracking, or scraping nested data. AVOID for simple API/web searches.

**B. Expert Agents**
- **Software Engineering:** USE to analyze specific GitHub repository architectures or find niche, real-world implementations. AVOID for standard coding or boilerplate.
- **Scientific Research:** USE when the user explicitly demands academic citations, verified studies, or deep literature review. AVOID for general knowledge.

**C. Cognitive Role-LLMs**
- **First-Principles Analyst:** USE for novel, opaque problems requiring atomic deconstruction. AVOID for standard, documented engineering problems.
- **Constructive Critic:** USE to pressure-test high-stakes architectural plans. AVOID for minor features or basic scripts.
- **Task Planner:** USE to map massive, multi-step project workflows before execution. AVOID for single-step updates.

# 3. TOOL COMBINATION LOGIC
- **Parallel Execution (`wait_for_task`):** ALWAYS parallelize via MCP when required capability domains are distinct (e.g., Research Agent + Software Engineering Agent simultaneously).
- **Sequential Execution:** ONLY use when Tool B has a hard dependency on the exact output data of Tool A.
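The "Error Handling & Retries" rule above is aimed at the LLM, but transient MCP transport errors can also be retried in client code before the model ever sees them. A minimal sketch; the attempt count and backoff values are illustrative, not platform requirements:

```python
import asyncio

async def call_tool_with_retries(session, name: str, args: dict,
                                 attempts: int = 3, base_delay: float = 0.5):
    """Retry an MCP tool call with exponential backoff on transient errors."""
    for attempt in range(attempts):
        try:
            return await session.call_tool(name, args)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error to the agent loop
            await asyncio.sleep(base_delay * (2 ** attempt))
```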
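Client-side, the parallel-execution rule maps naturally onto `asyncio.gather` over a single open session. A sketch under that assumption; the tool names and session object are illustrative:

```python
import asyncio

async def call_in_parallel(session, calls: list[tuple[str, dict]]) -> list:
    """Dispatch independent MCP tool calls concurrently.

    Each entry in `calls` is a (tool_name, arguments) pair; results come back
    in the same order as the input, so they can be zipped with the requests.
    """
    tasks = [session.call_tool(name, args) for name, args in calls]
    return await asyncio.gather(*tasks)
```

Reserve this for delegations in distinct capability domains; when one call needs another's output, fall back to plain sequential `await`s.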