OpenAI SDK
Set up Respan
- Sign up — Create an account at platform.respan.ai
- Create an API key — Generate one on the API keys page
- Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Use AI
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
What is OpenAI SDK?
The OpenAI SDK is the official client for OpenAI's APIs, available for both Python and TypeScript/JavaScript. It supports both the Chat Completions and Responses APIs. Respan can auto-instrument all OpenAI calls for tracing, route them through the Respan gateway for model switching and prompt management, or do both.
Example projects
Setup
Set environment variables
The Respan API key authenticates both LLM inference (gateway) and telemetry export (tracing).
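A minimal shell sketch of the environment setup; the variable names below are assumptions, so check your Respan dashboard for the exact ones:

```shell
# Variable names here are assumptions -- confirm them in the Respan docs.
export RESPAN_API_KEY="<your-respan-api-key>"   # used for both gateway inference and trace export
export OPENAI_API_KEY="<your-openai-api-key>"   # only needed for direct OpenAI access, e.g. the Batch API
```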
View your trace
Open the Traces page to see your auto-instrumented LLM spans.
Always call respan.flush() (Python) or await respan.flush() (TypeScript) before your process exits. Without it, pending spans may be lost.
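The flush requirement can be sketched with a hand-rolled span buffer standing in for the SDK's background exporter. The `SpanBuffer` class below is illustrative, not part of the Respan SDK; only `respan.flush()` itself is the real call:

```python
import atexit

class SpanBuffer:
    """Stand-in for the SDK's exporter: spans queue up and only reach the backend on flush."""
    def __init__(self):
        self.pending = []
        self.exported = []

    def record(self, span):
        self.pending.append(span)

    def flush(self):
        # Drain the queue -- without this, pending spans die with the process.
        self.exported.extend(self.pending)
        self.pending.clear()

buffer = SpanBuffer()
# Registering flush at exit guards against forgotten manual calls.
atexit.register(buffer.flush)

buffer.record({"name": "chat.completions.create"})
buffer.flush()  # analogous to calling respan.flush() before the process exits
```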
Configuration
Attributes
Attach customer identifiers, thread IDs, and metadata to spans.
In Respan()
Set defaults at initialization — these apply to all spans.
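A sketch of the defaults-at-init behavior, using a stub class in place of the real `Respan` constructor (the class name, signature, and attribute names are assumptions):

```python
class RespanStub:
    """Illustrative stand-in: defaults passed at init are merged into every span."""
    def __init__(self, **default_attributes):
        self.defaults = dict(default_attributes)

    def start_span(self, name, **attributes):
        # Per-span attributes win over initialization defaults.
        return {"name": name, **self.defaults, **attributes}

respan = RespanStub(customer_id="cus_123", environment="prod")
span = respan.start_span("chat.completions.create", thread_id="t-42")
```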
With propagate_attributes
Override per-request using a context scope.
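The scoped-override pattern can be sketched with `contextvars`; this `propagate_attributes` is a stand-in for the SDK's context scope, not its implementation:

```python
import contextvars
from contextlib import contextmanager

_attrs = contextvars.ContextVar("respan_attrs", default={})

@contextmanager
def propagate_attributes(**overrides):
    """Stand-in context scope: overrides apply inside the block, then revert."""
    token = _attrs.set({**_attrs.get(), **overrides})
    try:
        yield
    finally:
        _attrs.reset(token)

def start_span(name):
    # Every span started inside the scope picks up the propagated attributes.
    return {"name": name, **_attrs.get()}

with propagate_attributes(customer_id="cus_456", thread_id="t-9"):
    inner = start_span("chat.completions.create")  # carries the overrides
outer = start_span("chat.completions.create")      # back to defaults
```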
Decorators (optional)
Decorators are not required. All OpenAI calls are auto-traced by the instrumentor. Use @workflow and @task (Python) or withWorkflow and withTask (TypeScript) to add structure when you want to group related calls into a named workflow with nested tasks.
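A minimal sketch of how such decorators can record a named workflow with a nested task. These stand-ins only append to a flat list; the real decorators build nested spans:

```python
import functools

trace = []  # flat record of entered spans; a real tracer would nest these

def workflow(name):
    """Stand-in for @workflow: records a named workflow span on entry."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace.append(("workflow", name))
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def task(name):
    """Stand-in for @task: records a task span under the current workflow."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace.append(("task", name))
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@task("retrieve")
def retrieve(query):
    return f"docs for {query}"

@workflow("answer_question")
def answer(query):
    docs = retrieve(query)  # auto-traced OpenAI calls would appear here too
    return f"answer based on {docs}"

result = answer("pricing")
```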
Streaming
Streaming responses are auto-traced like regular completions.
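The usual accumulation pattern for streamed deltas, with a fake generator standing in for `client.chat.completions.create(..., stream=True)`:

```python
def fake_stream():
    """Stand-in for a streaming completion: yields content deltas chunk by chunk."""
    for piece in ["Hel", "lo", "!"]:
        yield {"choices": [{"delta": {"content": piece}}]}

parts = []
for chunk in fake_stream():
    # Some chunks (e.g. the final one) may carry no content delta.
    delta = chunk["choices"][0]["delta"].get("content")
    if delta:
        parts.append(delta)
answer = "".join(parts)
```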
Tool calls
Function calling is auto-traced. Wrap the workflow with @workflow and @task decorators for a structured trace tree.
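A sketch of the dispatch step in a function-calling loop; the `tool_call` dict mirrors the Chat Completions response shape, and `get_weather` is a toy tool:

```python
import json

def get_weather(city: str) -> str:
    # Toy tool implementation.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Shape of a tool call as returned by the Chat Completions API (values are illustrative).
tool_call = {
    "function": {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}
}

# Look up the requested tool, decode its JSON arguments, and invoke it.
fn = TOOLS[tool_call["function"]["name"]]
args = json.loads(tool_call["function"]["arguments"])
result = fn(**args)
```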
Multi-turn conversations
Multi-turn conversations are auto-traced. Each create() call becomes its own span. Use @workflow to group them into a single trace.
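The multi-turn pattern: keep one messages list, append each user turn and assistant reply, and make one `create()` call per turn (`fake_create` below stands in for the real client):

```python
messages = [{"role": "system", "content": "You are concise."}]

def fake_create(messages):
    """Stand-in for client.chat.completions.create: echoes the latest user turn."""
    last_user = messages[-1]["content"]
    return {"role": "assistant", "content": f"echo: {last_user}"}

for user_turn in ["hi", "bye"]:
    messages.append({"role": "user", "content": user_turn})
    reply = fake_create(messages)  # each call would become its own span
    messages.append(reply)         # carry the reply forward as context
```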
Structured output
JSON mode with Pydantic models is auto-traced.
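A stand-in sketch of parsing a JSON-mode payload into a typed object, using a stdlib dataclass here instead of Pydantic to stay dependency-free:

```python
import json
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

# Stand-in for the JSON string a JSON-mode completion would return.
raw = '{"name": "Ada", "age": 36}'
person = Person(**json.loads(raw))
```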
Batch API
The Batch API lets you submit large batches of requests for asynchronous processing at 50% of the standard cost. Use respan.log_batch_results() to log each batch result as an individual traced span in Respan.
Respan also provides a Batch API endpoint for batch processing with tracking parameters.
The Batch API requires a direct OPENAI_API_KEY. It does not go through the Respan gateway.
Async batch (cross-process)
For long-running batches where submission and retrieval happen in separate processes, save the trace_id and pass it to log_batch_results() later to link results back to the original trace.
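One way to sketch the cross-process handoff: persist the trace_id at submission time and reload it at retrieval time. The id value, file layout, and the commented `log_batch_results` call are illustrative:

```python
import json
import os
import tempfile

# --- Process A: submit the batch and persist the trace id for later linking ---
trace_id = "trace-abc123"  # hypothetical id captured when the workflow span is created
state_path = os.path.join(tempfile.mkdtemp(), "batch_state.json")
with open(state_path, "w") as f:
    json.dump({"trace_id": trace_id, "batch_id": "batch-1"}, f)

# --- Process B (possibly hours later): reload and link results back ---
with open(state_path) as f:
    state = json.load(f)
# respan.log_batch_results(batch, trace_id=state["trace_id"])  # links to the original trace
```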
Gateway features
The features below require the Gateway or Both setup from Step 3.
Switch models
Change the model parameter to use 250+ models from different providers through the same gateway.
See the full model list.
Prompt management
Use Respan prompt management to serve prompt templates from the platform. Use schema_version: 2 for all new integrations.
Chat Completions
Prompt messages are the base layer (system/context). Body messages are appended as runtime user turns.
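The layering described above can be sketched as a simple list merge; the real merge happens in the gateway, and these message values are illustrative:

```python
# Prompt template messages served by the platform (base layer: system/context).
prompt_messages = [{"role": "system", "content": "You are a helpful assistant."}]

# Body messages sent with the request (runtime user turns).
body_messages = [{"role": "user", "content": "Summarize this doc."}]

# The gateway appends body messages after the template's messages.
merged = prompt_messages + body_messages
```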
Responses API
For the Responses API, pass prompt config under respan_params. The prompt template becomes instructions and body input is preserved.
Prompt options
Respan parameters
Pass additional Respan parameters via extra_body for gateway features.
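An illustrative shape for `extra_body`; the parameter names inside `respan_params` are assumptions, so check the Respan parameters reference for the real list:

```python
# Parameter names below are assumptions -- see the Respan parameters reference.
extra_body = {
    "respan_params": {
        "customer_identifier": "cus_123",
        "thread_identifier": "thread-42",
    }
}

# Passed through the OpenAI client unchanged, e.g.:
# client.chat.completions.create(model="gpt-4o-mini", messages=[...], extra_body=extra_body)
```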
See Respan parameters for the full list.