  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is LangChain?

LangChain is a framework for building LLM applications with chains, agents, and retrieval-augmented generation. Respan captures every chain step, agent action, and LLM call as spans in a trace.

Setup

1. Install packages
2. Set environment variables

   export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
   export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"

3. Initialize and run
4. View your trace — Open the Traces page to see your chain execution with individual LLM spans.
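The initialize-and-run step can be sketched as follows — a minimal example that mirrors the `Respan()` and `flush()` usage shown later in this guide; it assumes the `respan` package and the OpenInference LangChain instrumentor are installed and the environment variables above are set:

```python
from respan import Respan
from openinference.instrumentation.langchain import LangChainInstrumentor
from langchain_openai import ChatOpenAI

# Initialize Respan with the LangChain instrumentor so every
# chain step and LLM call is captured as a span.
respan = Respan(instrumentations=[LangChainInstrumentor()])

llm = ChatOpenAI(model="gpt-4.1-nano")
response = llm.invoke("Say hello in one word.")
print(response.content)

# Ensure buffered spans are exported before the process exits.
respan.flush()
```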

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Falls back to `RESPAN_API_KEY` env var. |
| `base_url` | `str \| None` | `None` | Falls back to `RESPAN_BASE_URL` env var. |
| `instrumentations` | `list` | `[]` | Plugin instrumentations to activate (e.g. `LangChainInstrumentor()`). |
| `is_auto_instrument` | `bool \| None` | `False` | Auto-discover and activate all installed instrumentors via OpenTelemetry entry points. |
| `customer_identifier` | `str \| None` | `None` | Default customer identifier for all spans. |
| `metadata` | `dict \| None` | `None` | Default metadata attached to all spans. |
| `environment` | `str \| None` | `None` | Environment tag (e.g. `"production"`). |
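As a sketch, several of the parameters above can be combined at initialization — the parameter names come from the table; the values here are illustrative:

```python
from respan import Respan

# Illustrative values. When api_key and base_url are omitted, they
# fall back to the RESPAN_API_KEY / RESPAN_BASE_URL env vars.
respan = Respan(
    is_auto_instrument=True,             # discover installed instrumentors
    customer_identifier="user_123",      # default customer for all spans
    metadata={"service": "langchain-app"},
    environment="production",
)
```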

Attributes

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(
    instrumentations=[LangChainInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "langchain-app", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context manager.
from respan import Respan, workflow, propagate_attributes
from openinference.instrumentation.langchain import LangChainInstrumentor

respan = Respan(instrumentations=[LangChainInstrumentor()])

@workflow(name="handle_request")
def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "pro"},
    ):
        response = chain.invoke({"input": question})
        print(response.content)

| Attribute | Type | Description |
| --- | --- | --- |
| `customer_identifier` | `str` | Identifies the end user in Respan analytics. |
| `thread_identifier` | `str` | Groups related messages into a conversation. |
| `metadata` | `dict` | Custom key-value pairs. Merged with default metadata. |

Decorators

Use @workflow and @task to create structured trace hierarchies.
from respan import Respan, workflow, task
from openinference.instrumentation.langchain import LangChainInstrumentor
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

respan = Respan(instrumentations=[LangChainInstrumentor()])

llm = ChatOpenAI(model="gpt-4.1-nano")

@task(name="generate_outline")
def outline(topic: str) -> str:
    prompt = ChatPromptTemplate.from_messages([
        ("system", "Create a brief outline."),
        ("user", "{topic}"),
    ])
    chain = prompt | llm
    return chain.invoke({"topic": topic}).content

@workflow(name="content_pipeline")
def pipeline(topic: str):
    plan = outline(topic)
    prompt = ChatPromptTemplate.from_messages([
        ("system", "Write content from this outline."),
        ("user", "{outline}"),
    ])
    chain = prompt | llm
    result = chain.invoke({"outline": plan})
    print(result.content)

pipeline("Benefits of API gateways")
respan.flush()

Examples

Basic chain

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4.1-nano")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])

chain = prompt | llm | StrOutputParser()
result = chain.invoke({"input": "What is the capital of France?"})
print(result)

Agent with tools

from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the weather for a city."""
    return f"The weather in {city} is sunny, 72F"

llm = ChatOpenAI(model="gpt-4.1-nano")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, [get_weather], prompt)
executor = AgentExecutor(agent=agent, tools=[get_weather])

result = executor.invoke({"input": "What's the weather in San Francisco?"})
print(result["output"])

Gateway

You can route all LLM calls through the Respan gateway by configuring the ChatOpenAI client:
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4.1-nano",
    api_key=os.getenv("RESPAN_API_KEY"),
    base_url="https://api.respan.ai/api",
)
With the gateway, no OPENAI_API_KEY is needed, and you can switch models across providers by changing the model parameter.