Prerequisites

  1. Sign up: create an account at platform.respan.ai
  2. Create an API key: generate one on the API keys page
  3. Add credits or a provider key: add credits on the Credits page, or connect your own provider key on the Integrations page

Docs MCP

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is Smolagents?

Smolagents is Hugging Face’s lightweight agent framework. It provides a simple API for building agents that can use tools, write code, and interact with various LLM backends.

Setup

1. Install packages

pip install smolagents respan-ai openinference-instrumentation-smolagents python-dotenv
2. Set environment variables

export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
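
Alternatively, since the code in the next step calls load_dotenv(), you can keep the same values in a .env file next to your script instead of exporting them:

RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"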
3. Initialize and run

import os
from dotenv import load_dotenv

load_dotenv()

from respan import Respan
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from smolagents import CodeAgent, LiteLLMModel

# Initialize Respan with Smolagents instrumentation
respan = Respan(instrumentations=[SmolagentsInstrumentor()])

model = LiteLLMModel(model_id="gpt-4.1-nano")

agent = CodeAgent(
    tools=[],
    model=model,
)

result = agent.run("What is the capital of France?")
print(result)
respan.flush()
4. View your trace

Open the Traces page to see your agent trace with LLM calls and tool executions.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Falls back to the `RESPAN_API_KEY` env var. |
| `base_url` | `str \| None` | `None` | Falls back to the `RESPAN_BASE_URL` env var. |
| `instrumentations` | `list` | `[]` | Plugin instrumentations to activate (e.g. `SmolagentsInstrumentor()`). |
| `is_auto_instrument` | `bool \| None` | `False` | Auto-discover and activate all installed instrumentors via OpenTelemetry entry points. |
| `customer_identifier` | `str \| None` | `None` | Default customer identifier for all spans. |
| `metadata` | `dict \| None` | `None` | Default metadata attached to all spans. |
| `environment` | `str \| None` | `None` | Environment tag (e.g. `"production"`). |
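
For example, a fully configured client might look like this (the values shown are illustrative):

from respan import Respan
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

respan = Respan(
    instrumentations=[SmolagentsInstrumentor()],
    api_key="YOUR_RESPAN_API_KEY",  # optional; falls back to the RESPAN_API_KEY env var
    environment="production",
    metadata={"service": "smolagents-app", "version": "1.0.0"},
)

# Or let Respan discover all installed instrumentors via entry points:
# respan = Respan(is_auto_instrument=True)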

Attributes

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan
from openinference.instrumentation.smolagents import SmolagentsInstrumentor

respan = Respan(
    instrumentations=[SmolagentsInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "smolagents-app", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context manager.
from respan import Respan, workflow, propagate_attributes
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from smolagents import CodeAgent, LiteLLMModel

respan = Respan(instrumentations=[SmolagentsInstrumentor()])

# The agent used inside handle_request below
agent = CodeAgent(tools=[], model=LiteLLMModel(model_id="gpt-4.1-nano"))

@workflow(name="handle_request")
def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "pro"},
    ):
        result = agent.run(question)
        print(result)

| Attribute | Type | Description |
| --- | --- | --- |
| `customer_identifier` | `str` | Identifies the end user in Respan analytics. |
| `thread_identifier` | `str` | Groups related messages into a conversation. |
| `metadata` | `dict` | Custom key-value pairs. Merged with default metadata. |
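
For example, reusing one thread_identifier across several runs groups them into a single conversation, and per-request metadata is merged with the defaults set in Respan(). The identifiers below are illustrative:

from respan import Respan, propagate_attributes
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from smolagents import CodeAgent, LiteLLMModel

respan = Respan(
    instrumentations=[SmolagentsInstrumentor()],
    metadata={"service": "smolagents-app"},  # default metadata on every span
)

agent = CodeAgent(tools=[], model=LiteLLMModel(model_id="gpt-4.1-nano"))

# Both runs share a thread_identifier, so they appear as one conversation.
with propagate_attributes(thread_identifier="conv_002", metadata={"plan": "pro"}):
    agent.run("What is the capital of France?")
    agent.run("And what is its population?")

respan.flush()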

Decorators

Use @workflow and @task to create structured trace hierarchies.
from respan import Respan, workflow, task
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from smolagents import CodeAgent, LiteLLMModel

respan = Respan(instrumentations=[SmolagentsInstrumentor()])

model = LiteLLMModel(model_id="gpt-4.1-nano")

@task(name="run_agent")
def run_agent(question: str) -> str:
    agent = CodeAgent(tools=[], model=model)
    return str(agent.run(question))

@workflow(name="qa_pipeline")
def pipeline(question: str):
    answer = run_agent(question)
    print(answer)

pipeline("What are the benefits of LLM observability?")
respan.flush()
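
Decorators compose with propagate_attributes from the previous section, so per-request attributes can ride along with a workflow. A minimal sketch, using only the APIs shown above:

from respan import Respan, workflow, task, propagate_attributes
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from smolagents import CodeAgent, LiteLLMModel

respan = Respan(instrumentations=[SmolagentsInstrumentor()])

model = LiteLLMModel(model_id="gpt-4.1-nano")

@task(name="run_agent")
def run_agent(question: str) -> str:
    agent = CodeAgent(tools=[], model=model)
    return str(agent.run(question))

@workflow(name="qa_pipeline")
def pipeline(user_id: str, question: str):
    # Attributes set here propagate to the spans created inside the context
    with propagate_attributes(customer_identifier=user_id):
        print(run_agent(question))

pipeline("user_123", "What are the benefits of LLM observability?")
respan.flush()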

Examples

Basic agent with tool

from respan import Respan
from openinference.instrumentation.smolagents import SmolagentsInstrumentor
from smolagents import CodeAgent, LiteLLMModel, tool

# Instrument Smolagents so the agent run below is traced in Respan
respan = Respan(instrumentations=[SmolagentsInstrumentor()])

@tool
def get_weather(city: str) -> str:
    """Get the weather for a city.

    Args:
        city: The name of the city.

    Returns:
        A string with the weather information.
    """
    return f"The weather in {city} is sunny, 72F"

model = LiteLLMModel(model_id="gpt-4.1-nano")

agent = CodeAgent(
    tools=[get_weather],
    model=model,
)

result = agent.run("What's the weather in San Francisco?")
print(result)