  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is OpenAI Agents SDK?

The OpenAI Agents SDK (openai-agents) is a lightweight framework for building multi-agent workflows with tools, handoffs, and guardrails. Respan gives you full observability over every agent run, LLM generation, tool call, and handoff.

Setup

1. Install packages

pip install openai-agents respan-ai respan-instrumentation-openai-agents python-dotenv
2. Set environment variables

export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
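
Because the setup script below calls load_dotenv(), the same variables can instead live in a local .env file (a sketch; both values are placeholders):

```shell
# .env — read by load_dotenv() at startup
OPENAI_API_KEY=YOUR_OPENAI_API_KEY
RESPAN_API_KEY=YOUR_RESPAN_API_KEY
```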
3. Initialize and run

import os
import asyncio
from dotenv import load_dotenv

load_dotenv()

from respan import Respan
from respan_instrumentation_openai_agents import OpenAIAgentsInstrumentor
from agents import Agent, Runner, trace

respan = Respan(instrumentations=[OpenAIAgentsInstrumentor()])

agent = Agent(
    name="Assistant",
    instructions="You only respond in haikus.",
)

async def main():
    with trace("Hello world"):
        result = await Runner.run(agent, "Tell me about recursion.")
        print(result.final_output)
    respan.flush()

asyncio.run(main())
4. View your trace

Open the Traces page to see your workflow with agent spans, LLM generations, tool calls, and handoffs.
This step applies to Tracing and Both setups. The Gateway-only setup does not produce traces.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str \| None | None | Falls back to RESPAN_API_KEY env var. |
| base_url | str \| None | None | Falls back to RESPAN_BASE_URL env var. |
| instrumentations | list | [] | Plugin instrumentations to activate (e.g. OpenAIAgentsInstrumentor()). |
| customer_identifier | str \| None | None | Default customer identifier for all spans. |
| metadata | dict \| None | None | Default metadata attached to all spans. |
| environment | str \| None | None | Environment tag (e.g. "production"). |

Attributes

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan
from respan_instrumentation_openai_agents import OpenAIAgentsInstrumentor

respan = Respan(
    instrumentations=[OpenAIAgentsInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "agent-api", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context scope.
from respan import Respan, propagate_attributes
from respan_instrumentation_openai_agents import OpenAIAgentsInstrumentor
from agents import Agent, Runner, trace

respan = Respan(instrumentations=[OpenAIAgentsInstrumentor()])

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

async def handle_request(user_id: str, message: str):
    with trace("User request"):
        with propagate_attributes(
            customer_identifier=user_id,
            thread_identifier="conv_abc_123",
            metadata={"plan": "pro"},
        ):
            result = await Runner.run(agent, message)
            print(result.final_output)
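
Context scopes like propagate_attributes are typically built on Python's contextvars; the rough sketch below shows the general pattern with hypothetical names (scoped_attributes, current_attributes), not Respan's actual implementation:

```python
import contextvars
from contextlib import contextmanager

# Context variable holding the attribute overrides for the current scope
_attrs = contextvars.ContextVar("attrs", default={})

@contextmanager
def scoped_attributes(**overrides):
    """Merge overrides into the current attributes for the duration of the block."""
    token = _attrs.set({**_attrs.get(), **overrides})
    try:
        yield
    finally:
        # Restore whatever was in effect before this scope opened
        _attrs.reset(token)

def current_attributes() -> dict:
    return dict(_attrs.get())

with scoped_attributes(customer_identifier="user_123"):
    with scoped_attributes(metadata={"plan": "pro"}):
        # The inner scope sees both the outer and inner values
        print(current_attributes())
print(current_attributes())  # {} — overrides are gone outside the scope
```

Because the state lives in a ContextVar rather than a global, each async request sees only its own overrides, which is what makes per-request scoping safe under concurrency.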
| Attribute | Type | Description |
| --- | --- | --- |
| customer_identifier | str | Identifies the end user in Respan analytics. |
| thread_identifier | str | Groups related messages into a conversation. |
| metadata | dict | Custom key-value pairs. Merged with default metadata. |

Decorators (optional)

Decorators are not required. All agent runs, LLM calls, tool calls, and handoffs are auto-traced by the instrumentor. Use @workflow and @task to add structure when you want to group agent runs into named workflows with nested tasks.
from respan import Respan, workflow, task
from respan_instrumentation_openai_agents import OpenAIAgentsInstrumentor
from agents import Agent, Runner, function_tool

respan = Respan(instrumentations=[OpenAIAgentsInstrumentor()])

@function_tool
def search_docs(query: str) -> str:
    """Search the documentation."""
    return f"Results for: {query}"

researcher = Agent(
    name="Researcher",
    instructions="You research topics using the search tool.",
    tools=[search_docs],
)

writer = Agent(
    name="Writer",
    instructions="You write concise summaries.",
)

@task(name="research")
async def research(topic: str) -> str:
    result = await Runner.run(researcher, f"Research: {topic}")
    return result.final_output

@workflow(name="research_and_write")
async def pipeline(topic: str):
    findings = await research(topic)
    result = await Runner.run(writer, f"Summarize: {findings}")
    print(result.final_output)

import asyncio
asyncio.run(pipeline("API gateways"))
respan.flush()
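
Decorators like @task follow the standard function-wrapping pattern; the toy sketch below (generic Python, not Respan's code) shows how a named-span decorator can record enter/exit events around an async function:

```python
import asyncio
import functools

spans = []  # stands in for a real span exporter

def task(name):
    """Toy span decorator: records start/end events around the wrapped call."""
    def decorate(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            spans.append(("start", name))
            try:
                return await fn(*args, **kwargs)
            finally:
                # "end" is recorded even if the wrapped call raises
                spans.append(("end", name))
        return wrapper
    return decorate

@task(name="research")
async def research(topic: str) -> str:
    return f"findings on {topic}"

print(asyncio.run(research("API gateways")))  # findings on API gateways
print(spans)  # [('start', 'research'), ('end', 'research')]
```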

Examples

Tool calls

Tool calls are automatically captured as spans with inputs, outputs, and timing.
from respan import Respan
from respan_instrumentation_openai_agents import OpenAIAgentsInstrumentor
from agents import Agent, Runner, function_tool, trace

respan = Respan(instrumentations=[OpenAIAgentsInstrumentor()])

@function_tool
def get_weather(city: str) -> str:
    """Get the weather for a city."""
    return f"The weather in {city} is sunny, 72F"

agent = Agent(
    name="Weather Agent",
    instructions="Help users check the weather.",
    tools=[get_weather],
)

async def main():
    with trace("Weather check"):
        result = await Runner.run(agent, "What's the weather in San Francisco?")
        print(result.final_output)
    respan.flush()

Handoffs

Agent-to-agent handoffs are traced with full context.
from agents import Agent, Runner, trace

billing_agent = Agent(
    name="Billing Agent",
    instructions="Handle billing questions.",
)

support_agent = Agent(
    name="Support Agent",
    instructions="Route billing questions to the billing agent.",
    handoffs=[billing_agent],
)

async def main():
    with trace("Support handoff"):
        result = await Runner.run(support_agent, "I have a billing question")
        print(result.final_output)

Streaming

Stream agent responses with real-time text deltas.
from openai.types.responses import ResponseTextDeltaEvent
from agents import Agent, Runner

agent = Agent(name="Joker", instructions="You tell jokes.")

import asyncio

async def main():
    result = Runner.run_streamed(agent, input="Tell me 3 jokes.")
    # Consume the event stream and print raw text deltas as they arrive
    async for event in result.stream_events():
        if event.type == "raw_response_event" and isinstance(
            event.data, ResponseTextDeltaEvent
        ):
            print(event.data.delta, end="", flush=True)

asyncio.run(main())

Gateway features

The features below require the Gateway or Both setup from Step 3.

Switch models

Change the model parameter on your agents to use 250+ models from different providers through the same gateway.
agent = Agent(
    name="Assistant",
    model="claude-sonnet-4-5-20250929",  # Use Anthropic via gateway
    instructions="You are a helpful assistant.",
)
See the full model list.