Respan integrates with the Vercel AI SDK in two ways: route LLM requests through the gateway for automatic logging and fallbacks, or use the OpenTelemetry tracing exporter for detailed performance monitoring.
The gateway integration supports OpenAI, Anthropic, and Google Gemini providers. Point your provider client to the Respan base URL and all requests are automatically logged with token usage, cost, and latency.
The tracing integration uses the @keywordsai/exporter-vercel package with OpenTelemetry. Enable experimental_telemetry in your streamText calls to capture model invocations, message exchanges, and custom metadata.
For gateway mode: configure your provider client (OpenAI, Anthropic, or Google) with the Respan base URL and API key. Requests are proxied through the gateway and logged automatically.
For tracing mode: install @keywordsai/exporter-vercel, create an instrumentation.ts file that registers the Respan OpenTelemetry exporter, and set experimental_telemetry.isEnabled to true in your streamText configuration.
Both modes capture token usage, latency, and cost. The tracing mode additionally captures custom metadata like customer identifiers and pricing parameters.
```typescript
// Gateway mode - route through Respan
import { createOpenAI } from "@ai-sdk/openai";

const client = createOpenAI({
  baseURL: "https://api.keywordsai.co",
  apiKey: "YOUR_RESPAN_API_KEY",
  compatibility: "strict",
});
```

```typescript
// Tracing mode - OpenTelemetry exporter
// instrumentation.ts
import { registerOTel } from "@vercel/otel";
import { KeywordsAIExporter } from "@keywordsai/exporter-vercel";

registerOTel({
  serviceName: "my-app",
  traceExporter: new KeywordsAIExporter({
    apiKey: process.env.KEYWORDSAI_API_KEY,
  }),
});
```
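With the exporter registered, telemetry is opted into per call. A minimal sketch of a streamText call with tracing enabled — the model name, prompt, and metadata keys here are illustrative assumptions, not a documented schema; check the Respan docs for the metadata fields the platform recognizes:

```typescript
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

const result = streamText({
  model: openai("gpt-4o-mini"), // any AI SDK provider/model works here
  prompt: "Summarize the release notes.",
  experimental_telemetry: {
    isEnabled: true, // spans are only emitted when this is true
    metadata: {
      // illustrative custom metadata -- forwarded on the trace
      customer_identifier: "customer_123",
    },
  },
});
```

Without `isEnabled: true`, the call runs normally but emits no spans, so nothing reaches the exporter.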