  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is CrewAI?

CrewAI is a framework for orchestrating role-playing AI agents. It lets you define agents with specific roles, goals, and backstories, then compose them into crews that collaborate on tasks.

Setup

1. Install packages
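The install command isn't shown on this page. Assuming Respan ships a respan package on PyPI and the CrewAI instrumentor comes from the OpenInference project (matching the imports used below), the install would look like:

```shell
# respan is assumed from the `from respan import Respan` imports below;
# openinference-instrumentation-crewai provides CrewAIInstrumentor.
pip install respan crewai openinference-instrumentation-crewai
```
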

2. Set environment variables

export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"

3. Initialize and run

4. View your trace

Open the Traces page to see your crew execution with individual agent and task spans.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str \| None | None | Falls back to the RESPAN_API_KEY env var. |
| base_url | str \| None | None | Falls back to the RESPAN_BASE_URL env var. |
| instrumentations | list | [] | Plugin instrumentations to activate (e.g. CrewAIInstrumentor()). |
| is_auto_instrument | bool \| None | False | Auto-discover and activate all installed instrumentors via OpenTelemetry entry points. |
| customer_identifier | str \| None | None | Default customer identifier for all spans. |
| metadata | dict \| None | None | Default metadata attached to all spans. |
| environment | str \| None | None | Environment tag (e.g. "production"). |

Attributes

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan
from openinference.instrumentation.crewai import CrewAIInstrumentor

respan = Respan(
    instrumentations=[CrewAIInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "crewai-app", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context manager.
from respan import Respan, workflow, propagate_attributes
from openinference.instrumentation.crewai import CrewAIInstrumentor

respan = Respan(instrumentations=[CrewAIInstrumentor()])

@workflow(name="handle_request")
def handle_request(user_id: str, topic: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "pro"},
    ):
        result = crew.kickoff()  # `crew` is a Crew instance defined elsewhere
        print(result)

| Attribute | Type | Description |
| --- | --- | --- |
| customer_identifier | str | Identifies the end user in Respan analytics. |
| thread_identifier | str | Groups related messages into a conversation. |
| metadata | dict | Custom key-value pairs. Merged with default metadata. |

Decorators

Use @workflow and @task to create structured trace hierarchies.
from respan import Respan, workflow, task
from openinference.instrumentation.crewai import CrewAIInstrumentor
from crewai import Agent, Task, Crew

respan = Respan(instrumentations=[CrewAIInstrumentor()])

@task(name="run_research_crew")
def run_research(topic: str) -> str:
    researcher = Agent(
        role="Researcher",
        goal=f"Research {topic} thoroughly",
        backstory="You are an expert researcher.",
    )
    research_task = Task(
        description=f"Research {topic} and provide key findings.",
        expected_output="A summary of findings.",
        agent=researcher,
    )
    crew = Crew(agents=[researcher], tasks=[research_task])
    return str(crew.kickoff())

@workflow(name="research_pipeline")
def pipeline(topic: str):
    findings = run_research(topic)
    print(findings)

pipeline("LLM observability best practices")
respan.flush()

Examples

Basic crew

from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Find accurate information",
    backstory="You are an expert researcher.",
)

task = Task(
    description="Explain what API gateways are and their benefits.",
    expected_output="A clear explanation of API gateways.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)

Agents with tasks

from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Find key facts about a topic",
    backstory="You are a meticulous researcher.",
)

writer = Agent(
    role="Writer",
    goal="Write clear and engaging content",
    backstory="You are a skilled technical writer.",
)

research_task = Task(
    description="Research the topic of LLM tracing and observability.",
    expected_output="A list of key points about LLM tracing.",
    agent=researcher,
)

writing_task = Task(
    description="Write a blog post based on the research findings.",
    expected_output="A short blog post about LLM tracing.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
)
result = crew.kickoff()
print(result)