Prerequisites

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is Mistral AI?

The Mistral AI SDK is the official Python client for Mistral’s models, supporting chat completions, streaming, and function calling. Respan can auto-instrument all Mistral calls for tracing, route them through the Respan gateway, or both.

Setup

1. Install packages
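
Assuming the Respan SDK ships on PyPI as respan, install it together with the Mistral SDK and the OpenInference Mistral instrumentor imported in the snippets below:

pip install respan mistralai openinference-instrumentation-mistralai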

2. Set environment variables

export MISTRAL_API_KEY="YOUR_MISTRAL_API_KEY"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
3. Initialize and run
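
A minimal sketch of the Tracing setup, using the same imports and calls as the examples later on this page (the Gateway and Both setups route requests through the Respan gateway instead; see Gateway features):

import os

from respan import Respan
from openinference.instrumentation.mistralai import MistralAIInstrumentor
from mistralai import Mistral

# Activate auto-instrumentation for all Mistral calls.
respan = Respan(instrumentations=[MistralAIInstrumentor()])

client = Mistral(api_key=os.getenv("MISTRAL_API_KEY"))
response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)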

4. View your trace

Open the Traces page to see your auto-instrumented LLM spans.
This step applies to Tracing and Both setups. The Gateway-only setup does not produce traces.

Configuration

Parameter            Type         Default  Description
api_key              str | None   None     Falls back to RESPAN_API_KEY env var.
base_url             str | None   None     Falls back to RESPAN_BASE_URL env var.
instrumentations     list         []       Plugin instrumentations to activate (e.g. MistralAIInstrumentor()).
is_auto_instrument   bool | None  False    Auto-discover and activate all installed instrumentors via OpenTelemetry entry points.
customer_identifier  str | None   None     Default customer identifier for all spans.
metadata             dict | None  None     Default metadata attached to all spans.
environment          str | None   None     Environment tag (e.g. "production").
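
For example, a sketch combining several of these parameters (the values are illustrative):

from respan import Respan

respan = Respan(
    # api_key and base_url are omitted; they fall back to the
    # RESPAN_API_KEY and RESPAN_BASE_URL environment variables.
    is_auto_instrument=True,  # discover all installed instrumentors
    environment="production",
    metadata={"service": "chat-api"},
)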

Attributes

Attach customer identifiers, thread IDs, and metadata to spans.

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan
from openinference.instrumentation.mistralai import MistralAIInstrumentor

respan = Respan(
    instrumentations=[MistralAIInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "chat-api", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context manager.
import os

from respan import Respan, workflow, propagate_attributes
from openinference.instrumentation.mistralai import MistralAIInstrumentor
from mistralai import Mistral

respan = Respan(
    instrumentations=[MistralAIInstrumentor()],
    metadata={"service": "chat-api", "version": "1.0.0"},
)
client = Mistral(api_key=os.getenv("MISTRAL_API_KEY"))

@workflow(name="handle_request")
def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "pro"},  # merged with default metadata
    ):
        response = client.chat.complete(
            model="mistral-large-latest",
            messages=[{"role": "user", "content": question}],
        )
        print(response.choices[0].message.content)

Attribute            Type  Description
customer_identifier  str   Identifies the end user in Respan analytics.
thread_identifier    str   Groups related messages into a conversation.
metadata             dict  Custom key-value pairs. Merged with default metadata.

Decorators

Use @workflow and @task to create structured trace hierarchies.
import os

from respan import Respan, workflow, task
from openinference.instrumentation.mistralai import MistralAIInstrumentor
from mistralai import Mistral

respan = Respan(instrumentations=[MistralAIInstrumentor()])
client = Mistral(api_key=os.getenv("MISTRAL_API_KEY"))

@task(name="generate_outline")
def outline(topic: str) -> str:
    response = client.chat.complete(
        model="mistral-large-latest",
        messages=[
            {"role": "user", "content": f"Create a brief outline about: {topic}"},
        ],
    )
    return response.choices[0].message.content

@workflow(name="content_pipeline")
def pipeline(topic: str):
    plan = outline(topic)
    response = client.chat.complete(
        model="mistral-large-latest",
        messages=[
            {"role": "user", "content": f"Write content from this outline: {plan}"},
        ],
    )
    print(response.choices[0].message.content)

pipeline("Benefits of API gateways")
respan.flush()

Examples

These snippets reuse the client created in Setup.

Basic chat

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Say hello in three languages."}],
)
print(response.choices[0].message.content)

Streaming

stream = client.chat.stream(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Write a haiku about Python."}],
)
for chunk in stream:
    content = chunk.data.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
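
Function calling

The SDK also supports function calling, as noted in the intro. A minimal sketch; the get_weather tool schema below is illustrative, not part of Respan:

import json

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# The model returns a tool call instead of text; arguments arrive as a JSON string.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, json.loads(tool_call.function.arguments))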

Gateway features

The features below require the Gateway or Both setup from Step 3.
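
The snippets below use the OpenAI-style client.chat.completions.create interface with extra_body, which implies an OpenAI-compatible client pointed at the Respan gateway rather than the Mistral client from Setup. A sketch, assuming the gateway endpoint is the value of RESPAN_BASE_URL:

import os
from openai import OpenAI

# Assumption: the Respan gateway exposes an OpenAI-compatible endpoint.
client = OpenAI(
    api_key=os.getenv("RESPAN_API_KEY"),
    base_url=os.getenv("RESPAN_BASE_URL"),
)

messages = [{"role": "user", "content": "Hello"}]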

Switch models

Change the model parameter to use 250+ models from different providers through the same gateway.
# Mistral
response = client.chat.completions.create(model="mistral-large-latest", messages=messages)

# OpenAI
response = client.chat.completions.create(model="gpt-4.1-nano", messages=messages)

# Anthropic
response = client.chat.completions.create(model="claude-sonnet-4-5-20250929", messages=messages)
See the full model list.

Respan parameters

Pass additional Respan parameters via extra_body for gateway features.
response = client.chat.completions.create(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={
        "customer_identifier": "user_123",
        "fallback_models": ["gpt-4.1-nano"],
        "metadata": {"session_id": "abc123"},
        "thread_identifier": "conversation_456",
    },
)
See Respan parameters for the full list.