Prerequisites

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

What is the Vercel AI SDK?

The Vercel AI SDK is a TypeScript toolkit for building AI-powered applications with Next.js. This guide shows how to set up Respan with Vercel AI for both tracing and gateway routing.

Setup

1. Install packages

npm install @respan/exporter-vercel @vercel/otel
2. Set environment variables

Add your Respan credentials and your provider key to .env.local:
.env.local
OPENAI_API_KEY=your_openai_api_key_here
RESPAN_API_KEY=your_respan_api_key_here
RESPAN_BASE_URL=https://api.respan.ai
3. Set up OpenTelemetry instrumentation

Create instrumentation.ts in your project root (where package.json lives):
instrumentation.ts
import { registerOTel } from "@vercel/otel";
import { RespanExporter } from "@respan/exporter-vercel";

export function register() {
  registerOTel({
    serviceName: "next-app",
    traceExporter: new RespanExporter({
      apiKey: process.env.RESPAN_API_KEY,
      baseUrl: process.env.RESPAN_BASE_URL,
      debug: true,
    }),
  });
}
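Note that Next.js 15 and later load instrumentation.ts automatically. On Next.js 14 and earlier you may need to opt in explicitly via next.config.js (a sketch; check the docs for your Next.js version):

```javascript
// next.config.js — only needed on Next.js 14 and earlier;
// Next.js 15+ picks up instrumentation.ts without this flag.
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};

module.exports = nextConfig;
```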
4. Enable telemetry in your route

In your API route (e.g. app/api/chat/route.ts), enable telemetry:
app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        customer_identifier: "customer_from_metadata",
      },
    },
  });

  return result.toDataStreamResponse();
}
5. Run and verify

Start your dev server and make some chat requests:
npm run dev
Open the Traces page to confirm requests are being traced.

Configuration

The RespanExporter constructor accepts:
Parameter | Required | Description
--------- | -------- | -----------
apiKey    | Yes      | Your Respan API key.
baseUrl   | No       | Respan API base URL. Defaults to https://api.respan.ai.
debug     | No       | Enable debug logging. Defaults to false.

Attributes

Attach Respan-specific parameters to your traces via the experimental_telemetry option on any AI SDK call.

Via metadata

experimental_telemetry: {
  isEnabled: true,
  metadata: {
    customer_identifier: "user_123",
    thread_id: "conversation_456",
  },
}

Via header

Encode parameters as a base64 JSON header for full control:
experimental_telemetry: {
  isEnabled: true,
  headers: {
    "X-Data-Respan-Params": Buffer.from(
      JSON.stringify({
        customer_identifier: "user_123",
        thread_id: "conversation_456",
        metadata: { session: "abc" },
      })
    ).toString("base64"),
  },
}

Supported attributes

Attribute             | Description
--------------------- | -----------
customer_identifier   | Customer or user identifier
thread_id             | Thread or conversation identifier
metadata              | Custom key-value pairs attached to the trace
prompt_unit_price     | Custom input token price
completion_unit_price | Custom output token price
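For example, the pricing attributes can be passed alongside the other metadata on any AI SDK call; the prices below are placeholders for illustration, not real rates:

```typescript
// Hypothetical per-token prices — purely illustrative values.
const telemetry = {
  isEnabled: true,
  metadata: {
    customer_identifier: "user_123",
    prompt_unit_price: 0.0000025, // custom input token price
    completion_unit_price: 0.00001, // custom output token price
  },
};

// Pass it to any AI SDK call, e.g.:
// streamText({ model, messages, experimental_telemetry: telemetry });
```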

Gateway

Route LLM calls through the Respan gateway for automatic logging, fallbacks, and cost optimization. Override the baseURL in your provider SDK to point at Respan.

Compatibility

SDK helper        | Works via Respan?           | Switch models?
----------------- | --------------------------- | --------------
@ai-sdk/openai    | Yes                         | Yes
@ai-sdk/anthropic | Yes (Anthropic models only) | No
@ai-sdk/google    | Yes                         | Yes

Gateway examples

import { createOpenAI } from '@ai-sdk/openai'
import { streamText } from 'ai'

const client = createOpenAI({
  baseURL: 'https://api.respan.ai/api',
  apiKey: process.env.RESPAN_API_KEY,
  compatibility: 'strict',
})

const { textStream } = await streamText({
  model: client.chat('gpt-4o'),
  messages: [{ role: 'user', content: 'Hello!' }],
})

for await (const textPart of textStream) {
  console.log(textPart)
}
Using the Responses API:
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const client = createOpenAI({
  baseURL: "https://api.respan.ai/api",
  apiKey: process.env.RESPAN_API_KEY,
});

const result = await generateText({
  model: client.responses('gpt-4o-mini'),
  prompt: 'What happened in San Francisco last week?',
  tools: {
    web_search_preview: client.tools.webSearchPreview({
      searchContextSize: 'high',
      userLocation: {
        type: 'approximate',
        city: 'San Francisco',
        region: 'California',
      },
    }),
  },
});

console.log(result.text);

Passing Respan parameters via gateway

To attach Respan parameters (like customer_identifier) when using the gateway, encode them as a base64 header:
const respanHeaderContent = {
  customer_identifier: "customer_123",
  // other params...
}
const encoded = Buffer.from(JSON.stringify(respanHeaderContent)).toString('base64');

const client = createOpenAI({
  baseURL: "https://api.respan.ai/api",
  apiKey: process.env.RESPAN_API_KEY,
  compatibility: "strict",
  headers: {
    "X-Data-Respan-Params": encoded,
  },
});
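Because the header is just base64-encoded JSON, you can sanity-check it locally by decoding it back before wiring it into the client (a quick standalone check, not part of the request path):

```typescript
// Build the Respan params header value and verify it round-trips.
const payload = {
  customer_identifier: "customer_123",
  thread_id: "conversation_456",
};

const headerValue = Buffer.from(JSON.stringify(payload)).toString("base64");

// Decoding must yield the original object back.
const decoded = JSON.parse(Buffer.from(headerValue, "base64").toString("utf-8"));

console.log(decoded.customer_identifier); // "customer_123"
```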

Observability

With this integration, Respan auto-captures:
  • AI model calls — requests made via the Vercel AI SDK
  • Token usage — input and output token counts
  • Performance metrics — latency and throughput
  • Errors — failed requests and error details
  • Custom metadata — additional context attached via telemetry metadata/headers
View traces on the Traces page.