  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://respan.ai/docs/mcp"
    }
  }
}

What is the AI SDK?

The AI SDK (by Vercel) is a TypeScript toolkit for building AI-powered applications with Next.js, React, and other frameworks. It provides unified APIs for text generation, streaming, tool use, and structured outputs across multiple LLM providers.

Setup

1. Install packages

npm install @respan/exporter-vercel @vercel/otel ai @ai-sdk/openai
2. Set environment variables

export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
3. Create instrumentation file

Create instrumentation.ts in your project root (same level as package.json):
instrumentation.ts
import { registerOTel } from '@vercel/otel';
import { RespanExporter } from '@respan/exporter-vercel';

export function register() {
  registerOTel({
    serviceName: 'my-app',
    traceExporter: new RespanExporter({
      apiKey: process.env.RESPAN_API_KEY!,
    }),
  });
}
4. Enable telemetry in AI calls

Add experimental_telemetry to every AI SDK call. Use functionId to name each span and metadata to attach custom properties:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await generateText({
    model: openai('gpt-5-mini'),
    prompt,
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'generate_text',
      metadata: {
        customer_identifier: 'user-123',
        environment: 'production',
      },
    },
  });

  return Response.json({ text: result.text });
}
5. View your trace

Open the Traces page to see your AI calls with full input/output, token usage, and cost.

Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | string | RESPAN_API_KEY env var | Respan API key. |
| baseUrl | string | "https://api.respan.ai" | API base URL. |
| debug | boolean | false | Enable debug logging for troubleshooting. |
See the Vercel Exporter SDK reference for the full API.
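The options can be gathered into a single object before constructing the exporter. A sketch, with the documented defaults written out explicitly (listing them is optional, and the placeholder fallback is for illustration only):

```typescript
// Sketch: explicit exporter options mirroring the defaults in the table above.
// apiKey falls back to the RESPAN_API_KEY environment variable by default.
const exporterOptions = {
  apiKey: process.env.RESPAN_API_KEY ?? 'YOUR_RESPAN_API_KEY', // placeholder fallback for illustration
  baseUrl: 'https://api.respan.ai', // default API base URL
  debug: false,                     // set to true to log export activity
};
// Pass to the exporter: new RespanExporter(exporterOptions)
```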
Next.js streaming routes: If your AI calls use streaming (streamText), set maxDuration in your route handler to avoid Vercel’s default timeout:
export const maxDuration = 30;

Examples

Basic generation

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-5-mini'),
  prompt: 'Tell me a joke about AI',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'chat',
    metadata: { customer_identifier: 'user-123' },
  },
});

Multi-step pipeline with metadata

Each generateText / streamText call creates a traced span. All spans within the same request are grouped into a single trace:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { userId } = await req.json();
  const input = 'Tell me a joke';

  const intentResult = await generateText({
    model: openai('gpt-5-mini'),
    prompt: `Classify this intent in one word: "${input}"`,
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'classify_intent',
      metadata: {
        customer_identifier: userId,
        thread_identifier: 'session-abc-123',
        environment: 'production',
        step: 'classification',
        workflow: 'joke_pipeline',
      },
    },
  });

  const responseResult = await generateText({
    model: openai('gpt-5-mini'),
    prompt: `The user intent is "${intentResult.text}". Tell me a short joke.`,
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'generate_response',
      metadata: {
        customer_identifier: userId,
        thread_identifier: 'session-abc-123',
        environment: 'production',
        step: 'generation',
        workflow: 'joke_pipeline',
        intent: intentResult.text,
      },
    },
  });

  const summaryResult = await generateText({
    model: openai('gpt-5-mini'),
    prompt: `Summarize this in one sentence: "${responseResult.text}"`,
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'summarize',
      metadata: {
        customer_identifier: userId,
        thread_identifier: 'session-abc-123',
        environment: 'production',
        step: 'summarization',
        workflow: 'joke_pipeline',
      },
    },
  });

  return Response.json({
    intent: intentResult.text,
    response: responseResult.text,
    summary: summaryResult.text,
  });
}

Streaming with tools

import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-5-mini'),
    messages,
    tools: {
      getWeather: tool({
        description: 'Get weather for a city',
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => `${city}: sunny, 72°F`,
      }),
    },
    maxSteps: 5,
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'stream_with_tools',
    },
  });

  return result.toTextStreamResponse();
}

Attributes

Pass metadata and customer identifiers through the experimental_telemetry.metadata object. The exporter maps these to Respan fields automatically:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-5-mini'),
  prompt: 'Hello',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'chat',
    metadata: {
      customer_identifier: 'user-123',
      customer_name: 'John Doe',
      customer_email: 'john@example.com',
      thread_identifier: 'thread-abc',
      environment: 'production',
      workflow: 'onboarding',
    },
  },
});
| Attribute | Description |
| --- | --- |
| customer_identifier | User/customer identifier for filtering in the dashboard. |
| customer_name | Customer display name. |
| customer_email | Customer email. |
| thread_identifier | Conversation thread ID. |
| Any custom key | Added to the span’s metadata object in Respan. |
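Conceptually, the mapping splits reserved keys from custom ones. A hypothetical sketch of that split, for illustration only (the exporter’s actual implementation may differ):

```typescript
// Illustration only: separates the reserved Respan keys from custom metadata.
const RESERVED_KEYS = new Set([
  'customer_identifier',
  'customer_name',
  'customer_email',
  'thread_identifier',
]);

function splitMetadata(meta: Record<string, string>) {
  const reserved: Record<string, string> = {};
  const custom: Record<string, string> = {};
  for (const [key, value] of Object.entries(meta)) {
    // Reserved keys become first-class Respan fields; the rest land in metadata.
    (RESERVED_KEYS.has(key) ? reserved : custom)[key] = value;
  }
  return { reserved, custom };
}
```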

Troubleshooting

The @vercel/otel package may have broken peer dependencies. If you see missing module errors on startup, install them directly:
npm install @opentelemetry/api-logs
If dependency conflicts persist, switch to Yarn, which resolves peer dependencies more leniently:
package.json
{
  "packageManager": "yarn@4.9.2"
}
If traces aren’t appearing:
  1. Verify experimental_telemetry: { isEnabled: true } is set on every AI SDK call
  2. Check that instrumentation.ts is in your project root (same level as package.json)
  3. Ensure RESPAN_API_KEY is set in your environment
  4. Enable debug mode to see export logs:
new RespanExporter({
  apiKey: process.env.RESPAN_API_KEY!,
  debug: true,
})
If you see “Next.js inferred your workspace root, but it may not be correct”, set the root explicitly in next.config.js:
next.config.js
const nextConfig = {
  turbopack: {
    root: __dirname,
  },
};

module.exports = nextConfig;
Looking for gateway integration? See Gateway > Vercel AI SDK.