LiteLLM
LiteLLM provides a unified Python interface for calling 100+ LLM providers using the OpenAI format. Respan gives you full observability over every LiteLLM completion across providers, and can also route requests through the OpenAI-compatible Respan gateway endpoint.
Set up Respan
Create an account at platform.respan.ai and grab an API key. For gateway, also add credits or a provider key.
Run npx @respan/cli setup to configure Respan with your coding agent.
Example projects
Tracing
Gateway
Setup
Set environment variables
OPENAI_API_KEY (or any provider key) is used for LLM requests. RESPAN_API_KEY is used to export traces to Respan.
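A minimal shell sketch of the two variables. The key values shown are placeholders; get your Respan key from platform.respan.ai and use whichever provider key matches the models you call.

```shell
# Provider key: authenticates the LLM requests themselves
export OPENAI_API_KEY="sk-..."            # or ANTHROPIC_API_KEY, GEMINI_API_KEY, etc.

# Respan key: authenticates trace export to Respan
export RESPAN_API_KEY="your-respan-key"   # from platform.respan.ai
```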
Initialize and run
Register the Respan callback to log all completions automatically. Requests go directly to providers; the logs are sent to Respan.
View your trace
Open the Traces page to see your LiteLLM completions across providers as auto-traced spans.
Configuration
See the LiteLLM Exporter SDK reference for the full API.
Attributes
Pass Respan parameters inside metadata.respan_params on each call.
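A sketch of shaping that metadata dict before a call. The keys inside respan_params here (customer_identifier, trace_group_identifier) are hypothetical; the SDK reference linked above lists the supported parameters.

```python
def with_respan_params(customer_id: str, trace_group: str) -> dict:
    """Build the metadata dict that LiteLLM forwards to logging callbacks."""
    return {
        "respan_params": {
            "customer_identifier": customer_id,        # assumed key name
            "trace_group_identifier": trace_group,     # assumed key name
        }
    }

metadata = with_respan_params("user_123", "onboarding-flow")
# completion(model="gpt-4o-mini", messages=[...], metadata=metadata)
```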
Async usage
The callback supports async completions automatically.
Multiple providers
Because LiteLLM exposes every provider through the same interface, a single callback logs completions from all of them — no per-provider configuration is needed.