  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is Portkey?

Portkey is an AI gateway and observability platform that provides a unified interface to multiple LLM providers. Respan can instrument all Portkey calls for tracing and observability, giving you end-to-end visibility.
Portkey is itself a gateway, so only the tracing setup applies to this integration; Respan's gateway routing is not directly applicable.

Setup

1. Install packages

pip install respan-ai openinference-instrumentation-portkey portkey-ai python-dotenv
2. Set environment variables

export PORTKEY_API_KEY="YOUR_PORTKEY_API_KEY"
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
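
If you prefer a .env file (which load_dotenv() in the next step reads from the working directory), the same keys can go there; the values below are placeholders:

PORTKEY_API_KEY=YOUR_PORTKEY_API_KEY
RESPAN_API_KEY=YOUR_RESPAN_API_KEY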
3. Initialize and run

import os
from dotenv import load_dotenv

load_dotenv()

from portkey_ai import Portkey
from respan import Respan
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Initialize Respan with Portkey instrumentation
respan = Respan(instrumentations=[PortkeyInstrumentor()])

# Initialize the Portkey client
client = Portkey(api_key=os.getenv("PORTKEY_API_KEY"))

# Calls go through Portkey, auto-traced by Respan
response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "Say hello in three languages."}],
)
print(response.choices[0].message.content)
respan.flush()  # Flush buffered spans so they are exported before the script exits
4. View your trace

Open the Traces page to see your auto-instrumented LLM spans.

Configuration

Parameter            Type         Default  Description
api_key              str | None   None     Falls back to RESPAN_API_KEY env var.
base_url             str | None   None     Falls back to RESPAN_BASE_URL env var.
instrumentations     list         []       Plugin instrumentations to activate (e.g. PortkeyInstrumentor()).
customer_identifier  str | None   None     Default customer identifier for all spans.
metadata             dict | None  None     Default metadata attached to all spans.
environment          str | None   None     Environment tag (e.g. "production").
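
As a sketch combining these parameters (the specific values are illustrative, and api_key can be omitted to fall back to the environment variable):

import os

from respan import Respan
from openinference.instrumentation.portkey import PortkeyInstrumentor

respan = Respan(
    api_key=os.getenv("RESPAN_API_KEY"),  # or omit to fall back to RESPAN_API_KEY
    instrumentations=[PortkeyInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "chat-api", "version": "1.0.0"},
    environment="production",
)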

Attributes

Attach customer identifiers, thread IDs, and metadata to spans.

In Respan()

Set defaults at initialization — these apply to all spans.
from respan import Respan
from openinference.instrumentation.portkey import PortkeyInstrumentor

respan = Respan(
    instrumentations=[PortkeyInstrumentor()],
    customer_identifier="user_123",
    metadata={"service": "chat-api", "version": "1.0.0"},
)

With propagate_attributes

Override per-request using a context manager.
import os

from portkey_ai import Portkey
from respan import Respan, workflow, propagate_attributes
from openinference.instrumentation.portkey import PortkeyInstrumentor

respan = Respan(
    instrumentations=[PortkeyInstrumentor()],
    metadata={"service": "chat-api", "version": "1.0.0"},
)
client = Portkey(api_key=os.getenv("PORTKEY_API_KEY"))

@workflow(name="handle_request")
def handle_request(user_id: str, question: str):
    with propagate_attributes(
        customer_identifier=user_id,
        thread_identifier="conv_001",
        metadata={"plan": "pro"},
    ):
        response = client.chat.completions.create(
            model="gpt-4.1-nano",
            messages=[{"role": "user", "content": question}],
        )
        print(response.choices[0].message.content)
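
Calling the workflow then looks like this (the user ID and question are illustrative):

handle_request("user_123", "What is an AI gateway?")
respan.flush()
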
Attribute            Type  Description
customer_identifier  str   Identifies the end user in Respan analytics.
thread_identifier    str   Groups related messages into a conversation.
metadata             dict  Custom key-value pairs. Merged with default metadata.

Decorators

Use @workflow and @task to create structured trace hierarchies.
import os

from portkey_ai import Portkey
from respan import Respan, workflow, task
from openinference.instrumentation.portkey import PortkeyInstrumentor

respan = Respan(instrumentations=[PortkeyInstrumentor()])
client = Portkey(api_key=os.getenv("PORTKEY_API_KEY"))

@task(name="generate_outline")
def outline(topic: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[
            {"role": "user", "content": f"Create a brief outline about: {topic}"},
        ],
    )
    return response.choices[0].message.content

@workflow(name="content_pipeline")
def pipeline(topic: str):
    plan = outline(topic)
    response = client.chat.completions.create(
        model="gpt-4.1-nano",
        messages=[
            {"role": "user", "content": f"Write content from this outline: {plan}"},
        ],
    )
    print(response.choices[0].message.content)

pipeline("Benefits of API gateways")
respan.flush()
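
When this runs, the content_pipeline workflow should appear on the Traces page as the parent span, with the generate_outline task and both LLM calls nested inside it.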

Examples

Basic chat

Using the client and respan objects initialized in Setup:
response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "Say hello in three languages."}],
)
print(response.choices[0].message.content)
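
Streaming chat

Streaming should work the same way, assuming Portkey's OpenAI-compatible stream=True interface; a minimal sketch under that assumption (prompt and chunk handling are illustrative):

stream = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "Write a haiku about gateways."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta, mirroring the OpenAI streaming format
    delta = chunk.choices[0].delta
    content = getattr(delta, "content", None)
    if content:
        print(content, end="", flush=True)
print()
respan.flush()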