LlamaIndex
LlamaIndex is a framework for building LLM applications on your own data. It provides indexes, query engines, retrievers, and agents for retrieval-augmented generation (RAG). Respan gives you full observability over every query, retrieval, agent step, and LLM call, plus gateway routing through the OpenAI-compatible Respan endpoint.
Set up Respan
Create an account at platform.respan.ai and generate an API key. To use the gateway, also add credits or bring your own provider key.
Run `npx @respan/cli setup` to configure the integration with your coding agent.
Example projects
Tracing
Gateway
Setup
Set environment variables
`OPENAI_API_KEY` is used for LLM requests. `RESPAN_API_KEY` is used to export traces to Respan.
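In Python, both variables can be set on the process environment before the SDK is initialized. The key values below are placeholders, not real credentials:

```python
import os

# Set both keys before initializing Respan or LlamaIndex.
# OPENAI_API_KEY authenticates LLM requests; RESPAN_API_KEY exports traces.
os.environ["OPENAI_API_KEY"] = "sk-..."  # your OpenAI key
os.environ["RESPAN_API_KEY"] = "rs-..."  # your Respan key
```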
View your trace
Open the Traces page to see your LlamaIndex workflow with query spans, retrievers, agent steps, and LLM calls.
Configuration
Attributes
In Respan()
Set defaults at initialization; these apply to every span the instrumentor creates.
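A minimal sketch of setting defaults at initialization. The import path, `Respan` constructor, and its `metadata` parameter are assumptions based on this page, not a verified signature:

```python
from respan import Respan  # assumed import path

# Assumed parameter: attributes passed here are attached to every
# span the instrumentor emits for this process.
respan = Respan(
    metadata={"environment": "production", "team": "search"},
)
```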
With propagate_attributes
Override per-request using a context scope.
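A per-request override might look like the following. Using `propagate_attributes` as a context manager with a dict argument is an assumption from this page's description, not a confirmed API:

```python
from respan import propagate_attributes  # assumed import path

# Attributes set inside the scope override the initialization
# defaults for every span created within it.
with propagate_attributes({"user_id": "user_123", "session_id": "sess_456"}):
    response = query_engine.query("What did the author do growing up?")
```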
Decorators (optional)
Decorators are not required: all LlamaIndex query engines, retrievers, agents, and LLM calls are auto-traced by the instrumentor. Use `@workflow` and `@task` to add structure when you want to group related runs into a named workflow with nested tasks.
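A sketch of grouping auto-traced runs under one named workflow. The import path, decorator signatures, and the `name` parameter are assumptions from this page; `retriever` and `query_engine` stand in for objects built elsewhere:

```python
from respan import workflow, task  # assumed import path

@task()  # assumed signature: creates a nested task span
def rerank(nodes):
    # Runs as a child span of the enclosing workflow.
    return nodes

@workflow(name="docs-qa")  # assumed: groups everything below under one workflow
def answer(question: str):
    nodes = retriever.retrieve(question)  # auto-traced retriever span
    rerank(nodes)                         # explicit task span
    return query_engine.query(question)   # auto-traced query + LLM spans
```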
Examples
Query engine
Query engines are auto-traced as a single workflow with nested retriever and LLM spans.
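A minimal example using standard LlamaIndex APIs (`SimpleDirectoryReader`, `VectorStoreIndex`, `as_query_engine`). The `Respan()` initialization line is an assumption from this page; it must run before the query so the instrumentor can capture the spans:

```python
from respan import Respan  # assumed: initialize tracing first
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

Respan()  # assumed no-argument initialization

# Load documents, build an in-memory vector index, and query it.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

# Appears in Respan as one workflow with nested retriever and LLM spans.
response = query_engine.query("What did the author do growing up?")
print(response)
```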