Trace your first call

Get your first trace in Respan in 5 minutes.

Introduction

Tracing captures every LLM call, tool execution, and agent step in your application and sends it to the Respan platform. The Respan SDK instruments your code, automatically records inputs, outputs, latency, cost, and token usage, and exports the data as spans. Spans form a tree view of your workflow so you can see exactly how a request flowed through your agents and tools.
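The tree structure of a trace can be pictured with a small sketch. This is purely conceptual (the `Span` class below is hypothetical, not a type from the Respan SDK): each span records one step, and nested steps become children of their parent span.

```python
# Conceptual model of a trace: spans nest into a tree (illustrative only;
# `Span` is a hypothetical stand-in, not a Respan SDK type).
from dataclasses import dataclass, field


@dataclass
class Span:
    name: str
    children: list = field(default_factory=list)

    def add(self, name):
        # Create a child span nested under this one.
        child = Span(name)
        self.children.append(child)
        return child

    def render(self, depth=0):
        # Indent each level to show the tree view.
        lines = ["  " * depth + self.name]
        for child in self.children:
            lines.append(child.render(depth + 1))
        return "\n".join(lines)


# A request flowing through an agent, an LLM call, and a tool call:
trace = Span("agent_run")
llm = trace.add("llm_call")
llm.add("tool: web_search")
trace.add("llm_call (final)")
print(trace.render())
```

Reading the rendered tree top to bottom shows exactly how the request flowed: the agent made an LLM call, that call invoked a tool, and a final LLM call produced the answer.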


1. Set up your account

Sign up at platform.respan.ai and create an API key on the API keys page.

Set your environment variable:

```shell
export RESPAN_API_KEY="your-api-key"
```



2. Choose your integration

Pick the integration that matches your stack, install the packages, and initialize Respan. Agent SDKs like Claude Agent SDK and Vercel AI SDK automatically group all nested LLM calls into a single trace. The OpenAI SDK traces each call independently. To group multiple calls into one trace, wrap them in a @workflow (Python) or withWorkflow() (TypeScript). We will cover decorators in more detail in the Traces section.

```shell
pip install respan-ai respan-instrumentation-openai openai
```

```python
from openai import OpenAI
from respan import Respan
from respan_instrumentation_openai import OpenAIInstrumentor

respan = Respan(instrumentations=[OpenAIInstrumentor()])

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "Say hello in three languages."}],
)
print(response.choices[0].message.content)
respan.flush()  # export any pending spans before the process exits
```

See the full OpenAI SDK integration guide for streaming, function calling, and more.


3. See your first trace

Open the Traces page in the Respan dashboard. You should see your trace appear within a few seconds.

Agent tracing visualization

4. Set up the gateway

Route your LLM traffic through Respan’s AI gateway to get automatic logging, fallbacks, retries, load balancing, and caching across 250+ models. Point your SDK at Respan’s base URL and use your Respan API key.

Either add credits or add your LLM provider key so Respan can call models on your behalf.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.respan.ai/api/",
    api_key="YOUR_RESPAN_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "Say hello in three languages."}],
)
print(response.choices[0].message.content)
```

Change the model parameter to use any supported provider through the same endpoint. See the full Gateway guide for advanced configuration.
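The fallback and retry behavior the gateway provides can be sketched in a few lines. The real gateway does this server-side, so your client code stays unchanged; the function and model names below are illustrative stand-ins:

```python
# Minimal sketch of gateway-style fallbacks and retries (illustrative only;
# the real gateway handles this server-side).
def call_with_fallbacks(models, call, retries_per_model=2):
    """Try each model in order, retrying transient failures,
    and return the first successful result."""
    last_error = None
    for model in models:
        for _ in range(retries_per_model):
            try:
                return call(model)
            except RuntimeError as err:  # stand-in for a provider error
                last_error = err
    raise RuntimeError("all models failed") from last_error


# Simulated providers: the first model always fails, the second succeeds.
def fake_call(model):
    if model == "primary-model":
        raise RuntimeError("rate limited")
    return f"response from {model}"


print(call_with_fallbacks(["primary-model", "backup-model"], fake_call))
# -> response from backup-model
```

The same idea scales to load balancing (rotate the model list) and caching (return a stored response before calling any provider).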


What’s next