LangChain
LangChain is a framework for building applications with language models. It provides chains, agents, retrievers, and integrations across providers. Respan gives you full observability over every chain run, agent step, retriever call, and LLM generation — and gateway routing through the OpenAI-compatible Respan endpoint.
Set up Respan
Create an account at platform.respan.ai and grab an API key. To use the gateway, also add credits or a provider key.
Run npx @respan/cli setup to configure Respan with your coding agent.
Example projects: Tracing, Gateway
Setup
Set environment variables
OPENAI_API_KEY is used for LLM requests. RESPAN_API_KEY is used to export traces to Respan.
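A minimal sketch of setting both variables from Python rather than the shell. The variable names come from the text above; the requirement to set them before initializing the SDK is an assumption about how such instrumentors typically read configuration.

```python
import os

# Set these before importing or initializing the Respan SDK, since
# instrumentors usually read environment variables at initialization
# (an assumption; the variable names themselves come from the docs above).
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"  # authenticates LLM requests
os.environ["RESPAN_API_KEY"] = "your-respan-api-key"  # authorizes trace export to Respan
```

In production, prefer exporting these in your shell or deployment environment instead of hardcoding them in source.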
View your trace
Open the Traces page to see your LangChain workflow with chain runs, LLM calls, retriever spans, and tool calls.
Configuration
Attributes
In Respan()
Set defaults at initialization — these apply to all spans.
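A hypothetical sketch of what this could look like. Only the `Respan()` initializer is named in the text; the import path and the `attributes` parameter name are assumptions.

```python
from respan import Respan  # import path is an assumption

# Attributes passed at initialization are attached to every span
# the instrumentor emits (parameter name is an assumption).
respan = Respan(
    attributes={
        "environment": "production",
        "app_version": "1.2.0",
    },
)
```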
With propagate_attributes
Override per-request using a context scope.
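A sketch under the same assumptions: `propagate_attributes` is named in the text, but its import path, signature, and context-manager form are guesses.

```python
from respan import Respan, propagate_attributes  # import paths are assumptions

respan = Respan()

# Attributes set inside the scope override the initialization defaults
# for every span created within it (assumed semantics, per the text above).
with propagate_attributes({"user_id": "user-123", "session_id": "sess-abc"}):
    chain.invoke({"question": "What does this contract cover?"})  # chain defined elsewhere
```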
Decorators (optional)
Decorators are not required. All LangChain chains, agents, retrievers, and LLM calls are auto-traced by the instrumentor. Use @workflow and @task (Python) or withWorkflow and withTask (TypeScript) to add structure when you want to group related runs into a named workflow with nested tasks.
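A hypothetical sketch of grouping auto-traced LangChain runs under a named workflow. The decorator names come from the text; their import path and keyword arguments are assumptions.

```python
from respan import Respan, workflow, task  # import path is an assumption

respan = Respan()

@task(name="retrieve")  # decorator signature is an assumption
def retrieve(question: str) -> list:
    # The retriever call inside is still auto-traced; the decorator
    # just nests it under a named task span.
    return retriever.invoke(question)  # retriever defined elsewhere

@workflow(name="qa")
def answer(question: str) -> str:
    docs = retrieve(question)
    return chain.invoke({"question": question, "docs": docs})  # chain defined elsewhere
```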
Examples
Chains
Chains are auto-traced as a single workflow with nested LLM and tool spans.
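For example, a plain LCEL chain needs no tracing code beyond initialization. The LangChain imports below are the standard `langchain_core`/`langchain_openai` APIs; the `Respan()` call and its import path are assumptions based on the configuration section above.

```python
from respan import Respan  # import path is an assumption
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

Respan()  # auto-instruments LangChain; no decorators needed

prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

# The whole invocation shows up as one workflow span with nested
# prompt, LLM, and parser spans.
result = chain.invoke({"text": "LangChain is a framework for LLM apps."})
print(result)
```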
Streaming
Streaming responses are auto-traced like regular calls.
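A sketch of a streamed call, assuming Respan records the completed generation on a single span once the stream finishes (the text above only says streaming is auto-traced). The `.stream()` method is standard LangChain; the `Respan()` import is an assumption.

```python
from respan import Respan  # import path is an assumption
from langchain_openai import ChatOpenAI

Respan()

llm = ChatOpenAI(model="gpt-4o-mini")

# Chunks are yielded as they arrive; the full generation is traced
# like a regular call once the stream completes (assumed behavior).
for chunk in llm.stream("Write a haiku about tracing."):
    print(chunk.content, end="", flush=True)
```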