Anthropic SDK

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

```json
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://mcp.respan.ai/mcp/docs"
    }
  }
}
```

What is the Anthropic SDK?

The Anthropic SDK is the official Python client for Anthropic’s Claude models, supporting messages, streaming, and tool use. Respan can auto-instrument all Anthropic calls for tracing, route them through the Respan gateway, or both.

Setup

Install packages

Set environment variables

```shell
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
```

No ANTHROPIC_API_KEY needed — the Respan gateway handles provider authentication.

Initialize and run

Alternatively, to trace Anthropic calls without routing them through the gateway (requests go directly to Anthropic):

1. Install packages
2. Set environment variables

   ```shell
   export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY"
   export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
   ```

3. Initialize and run
4. View your trace

Open the Traces page to see your auto-instrumented LLM spans.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | Falls back to the `RESPAN_API_KEY` env var. |
| `base_url` | `str \| None` | `None` | Falls back to the `RESPAN_BASE_URL` env var. |
| `instrumentations` | `list` | `[]` | Plugin instrumentations to activate (e.g. `AnthropicInstrumentor()`). |
| `is_auto_instrument` | `bool \| None` | `False` | Auto-discover and activate all installed instrumentors via OpenTelemetry entry points. |
| `customer_identifier` | `str \| None` | `None` | Default customer identifier for all spans. |
| `metadata` | `dict \| None` | `None` | Default metadata attached to all spans. |
| `environment` | `str \| None` | `None` | Environment tag (e.g. `"production"`). |
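The `api_key` and `base_url` fallback rows above amount to "an explicit argument wins, otherwise read the environment variable". A stdlib sketch of that resolution order (`resolve_setting` is a hypothetical helper, not part of the SDK):

```python
import os

def resolve_setting(explicit, env_var):
    # Mirrors the documented fallback: an explicit argument wins,
    # otherwise the named environment variable is used (may be None).
    if explicit is not None:
        return explicit
    return os.environ.get(env_var)

os.environ["RESPAN_API_KEY"] = "sk-from-env"
print(resolve_setting(None, "RESPAN_API_KEY"))           # sk-from-env
print(resolve_setting("sk-explicit", "RESPAN_API_KEY"))  # sk-explicit
```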

Attributes

Attach customer identifiers, thread IDs, and metadata to spans.

In Respan()

Set defaults at initialization — these apply to all spans.

```python
from respan import Respan
from openinference.instrumentation.anthropic import AnthropicInstrumentor

respan = Respan(
    instrumentations=[AnthropicInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "chat-api", "version": "1.0.0"},
)
```

With propagate_attributes

Override per-request using a context manager.

```python
from anthropic import Anthropic
from respan import Respan, workflow, propagate_attributes
from openinference.instrumentation.anthropic import AnthropicInstrumentor

respan = Respan(
    instrumentations=[AnthropicInstrumentor()],
    metadata={"service": "chat-api", "version": "1.0.0"},
)
client = Anthropic()

@workflow(name="handle_request")
def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "pro"},  # merged with default metadata
    ):
        message = client.messages.create(
            model="claude-sonnet-4-5-20250929",
            max_tokens=1024,
            messages=[{"role": "user", "content": question}],
        )
        print(message.content[0].text)
```

| Attribute | Type | Description |
| --- | --- | --- |
| `customer_identifier` | `str` | Identifies the end user in Respan analytics. |
| `thread_identifier` | `str` | Groups related messages into a conversation. |
| `metadata` | `dict` | Custom key-value pairs. Merged with default metadata. |
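The "merged with default metadata" behavior can be pictured as a plain dict merge. A sketch with assumed merge semantics (request-level values win on a key collision; the real SDK's rules may differ):

```python
default_metadata = {"service": "chat-api", "version": "1.0.0"}
request_metadata = {"plan": "pro"}

# Per-request keys are laid over the defaults; the defaults are untouched.
merged = {**default_metadata, **request_metadata}
print(merged)  # {'service': 'chat-api', 'version': '1.0.0', 'plan': 'pro'}
```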

Decorators

Use @workflow and @task to create structured trace hierarchies.

```python
from respan import Respan, workflow, task
from openinference.instrumentation.anthropic import AnthropicInstrumentor
from anthropic import Anthropic

respan = Respan(instrumentations=[AnthropicInstrumentor()])
client = Anthropic()

@task(name="generate_outline")
def outline(topic: str) -> str:
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[
            {"role": "user", "content": f"Create a brief outline about: {topic}"},
        ],
    )
    return message.content[0].text

@workflow(name="content_pipeline")
def pipeline(topic: str):
    plan = outline(topic)
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=2048,
        messages=[
            {"role": "user", "content": f"Write content from this outline: {plan}"},
        ],
    )
    print(message.content[0].text)

pipeline("Benefits of API gateways")
respan.flush()
```
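Conceptually, a span-producing decorator like `@task` just wraps the call and records when it starts and ends. A stdlib sketch of that idea (not Respan's actual implementation):

```python
import functools

# Collected (event, span_name) pairs; a stand-in for real exported spans.
spans = []

def task(name):
    # Simplified tracing decorator: open a span, run the function,
    # and close the span even if the function raises.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            spans.append(("start", name))
            try:
                return fn(*args, **kwargs)
            finally:
                spans.append(("end", name))
        return wrapper
    return decorator

@task(name="generate_outline")
def outline(topic):
    return f"Outline for {topic}"

outline("API gateways")
print(spans)  # [('start', 'generate_outline'), ('end', 'generate_outline')]
```

Nesting a `@task`-decorated call inside a `@workflow`-decorated one is what produces the parent/child hierarchy you see on the Traces page.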

Examples

Basic message

```python
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Say hello in three languages."}],
)
print(message.content[0].text)
```

Streaming

```python
with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about Python."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```

Tool calls

```python
tools = [
    {
        "name": "get_weather",
        "description": "Get the weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)

if message.stop_reason == "tool_use":
    tool_block = next(b for b in message.content if b.type == "tool_use")
    result = f"Sunny, 72F in {tool_block.input['city']}"

    final = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        tools=tools,
        messages=[
            {"role": "user", "content": "What's the weather in Paris?"},
            {"role": "assistant", "content": message.content},
            {"role": "user", "content": [
                {"type": "tool_result", "tool_use_id": tool_block.id, "content": result}
            ]},
        ],
    )
    print(final.content[0].text)
```
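The example above hard-codes the tool result. In a real app you would dispatch the model's `tool_use` block to a local function by name. A stdlib sketch of that dispatch step (`run_tool` and `LOCAL_TOOLS` are hypothetical names, not part of the SDK):

```python
def get_weather(city: str) -> str:
    # Local stand-in for a real tool implementation.
    return f"Sunny, 72F in {city}"

# Dispatch table keyed by tool name, matching the schema above.
LOCAL_TOOLS = {"get_weather": get_weather}

def run_tool(block_type: str, name: str, tool_input: dict) -> str:
    # block_type / name / tool_input mirror tool_block.type,
    # tool_block.name, and tool_block.input from the response.
    if block_type != "tool_use":
        raise ValueError("not a tool_use block")
    return LOCAL_TOOLS[name](**tool_input)

print(run_tool("tool_use", "get_weather", {"city": "Paris"}))  # Sunny, 72F in Paris
```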

Gateway features

The features below require the gateway setup from the Setup section above.

Switch models

Change the model parameter to use different Claude models through the same gateway.

```python
# Claude Sonnet
message = client.messages.create(model="claude-sonnet-4-5-20250929", max_tokens=1024, messages=messages)

# Claude Haiku
message = client.messages.create(model="claude-3-5-haiku-20241022", max_tokens=1024, messages=messages)

# Claude Opus
message = client.messages.create(model="claude-3-opus-20240229", max_tokens=1024, messages=messages)
```

See the full model list.

Respan parameters

To use gateway features, pass additional Respan parameters through the `metadata` argument, nested under the `respan_params` key.

```python
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
    metadata={
        "respan_params": {
            "customer_identifier": "user_123",
            "metadata": {"session_id": "abc123"},
            "thread_identifier": "conversation_456",
        }
    },
)
```

See Respan parameters for the full list.
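If you attach these parameters on many calls, it can help to build the payload in one place. A small sketch (`build_respan_metadata` is a hypothetical helper, not part of the SDK; it only packs values under the `respan_params` key shown above):

```python
def build_respan_metadata(customer_identifier=None, thread_identifier=None, metadata=None):
    # Pack only the parameters that were provided, nested under
    # "respan_params" as the gateway example above expects.
    params = {}
    if customer_identifier is not None:
        params["customer_identifier"] = customer_identifier
    if thread_identifier is not None:
        params["thread_identifier"] = thread_identifier
    if metadata is not None:
        params["metadata"] = metadata
    return {"respan_params": params}

print(build_respan_metadata(customer_identifier="user_123", metadata={"session_id": "abc123"}))
```

The return value can then be passed directly as the `metadata` argument of `client.messages.create(...)`.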