The ACP SDK lets you wrap an existing agent, regardless of its framework or programming language, into a reusable, interoperable service. By implementing a simple interface, you make your agent ACP-compatible: it can communicate over HTTP, participate in workflows with other agents, and exchange structured messages in a shared format.

Once wrapped, your agent becomes:

  • Remotely callable over REST APIs
  • Composable in workflows with other agents
  • Discoverable by other systems
  • Reusable without changing its internal logic

Simple agent

Install the SDK first: uv add acp-sdk

Wrap an agent by annotating a Python function with @server.agent(). The agent name comes from the function name, and the description comes from the docstring. You can add more metadata, such as capabilities, dependencies, and content types; see the Agent Manifest section for details.
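For example, here is a hedged sketch of overriding those defaults explicitly. The name and description keyword arguments are an assumption about the decorator; omit them and the function name and docstring are used instead, as described above:

from collections.abc import AsyncGenerator

from acp_sdk.models import Message
from acp_sdk.server import Context, RunYield, RunYieldResume, Server

server = Server()


# Hedged sketch: the explicit name/description overrides are assumed here;
# without them, the agent would be published as "repeat_messages"
@server.agent(name="parrot", description="Repeats every message it receives")
async def repeat_messages(input: list[Message], context: Context) -> AsyncGenerator[RunYield, RunYieldResume]:
    for message in input:
        yield message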

The following complete example creates an ACP-compliant agent that can receive messages and respond via HTTP using the ACP protocol:

echo.py
from collections.abc import AsyncGenerator
from acp_sdk.models import Message
from acp_sdk.server import Context, RunYield, RunYieldResume, Server

# Create a new ACP server instance
server = Server()


@server.agent()
async def echo(input: list[Message], context: Context) -> AsyncGenerator[RunYield, RunYieldResume]:
    """Echoes everything"""
    for message in input:
        yield message

# Start the ACP server
server.run()

What happens here:

  • The @server.agent() decorator registers your function as an ACP agent
  • input contains the messages sent to your agent
  • context provides request metadata and utilities
  • yield statements send responses back to the caller
  • The server automatically handles HTTP routing and message serialization (see the client sketch below)
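Because the agent is exposed over HTTP, you can call it from another process with the SDK's client. The following is a minimal sketch, assuming the echo server above is running on the default http://localhost:8000 and that your SDK version provides the Client API shown:

client.py
import asyncio

from acp_sdk.client import Client
from acp_sdk.models import Message, MessagePart


async def main() -> None:
    # Assumes the echo server is reachable on the default port
    async with Client(base_url="http://localhost:8000") as client:
        # Run the agent registered as "echo" and wait for its output
        run = await client.run_sync(
            agent="echo",
            input=[Message(parts=[MessagePart(content="Howdy!", content_type="text/plain")])],
        )
        print(run.output)


asyncio.run(main())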

Simple LLM agent

Here’s a more sophisticated example that wraps an LLM agent with memory and tools using the beeai-framework:

llm.py
from collections.abc import AsyncGenerator

from acp_sdk.models import Message, MessagePart
from acp_sdk.server import RunYield, RunYieldResume, Server
from beeai_framework.agents.react import ReActAgent
from beeai_framework.backend.chat import ChatModel
from beeai_framework.backend.message import UserMessage
from beeai_framework.memory.token_memory import TokenMemory

server = Server()


@server.agent()
async def llm(input: list[Message]) -> AsyncGenerator[RunYield, RunYieldResume]:
    """LLM agent that processes input and returns a response"""

    # Create an LLM instance
    llm = ChatModel.from_name("ollama:llama3.1")

    # Create a memory instance
    memory = TokenMemory(llm)

    # Add messages to memory
    for message in input:
        await memory.add(UserMessage(str(message)))

    # Create agent with memory and tools
    agent = ReActAgent(llm=llm, tools=[], memory=memory)

    # Run the agent with the memory
    response = await agent.run()

    # Yield the response
    yield MessagePart(content=response.result.text)


server.run()
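As written, this agent expects a local Ollama instance serving the llama3.1 model (see the ChatModel.from_name call). Once the server is running, you can invoke it exactly like the echo agent, for example by pointing the client sketch above at agent="llm".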