Set up Respan
- Sign up — Create an account at platform.respan.ai
- Create an API key — Generate one on the API keys page
- Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Set up tracing
To observe your AI agents, start by setting up tracing. Traces capture the full execution tree of your workflows: every agent step, tool call, and model request in a single view. We provide several ways to get started. See the agent tracing quickstart for a hands-on walkthrough:
- Tracing SDK: install the SDK and add @workflow/@task decorators to your code. LLM calls are captured automatically. Works with any framework or custom code.
- Framework integrations: if you’re already using OpenAI Agents SDK, Vercel AI SDK, Mastra, LangGraph, or another framework, use our pre-built exporters for zero-effort tracing.
- Manual ingestion: send traces directly via the OTLP endpoint or JSON ingest API if you have existing telemetry pipelines.
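The SDK option above can be sketched as follows. This is a hedged, self-contained illustration of decorator-based tracing, not the Respan SDK itself: the real import path and decorator signatures come from the SDK docs, so here the `@workflow`/`@task` decorators are emulated with a local `traced` helper to show what the pattern captures.

```python
# Minimal sketch of decorator-based tracing. The real Respan SDK provides
# @workflow and @task; these local stand-ins only illustrate the idea.
import functools
import time

SPANS = []  # collected spans; an actual SDK would export these to the backend

def traced(kind):
    """Return a decorator that records a span for each call of the function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record the span even if the call raises, so errors are traced too.
                SPANS.append({
                    "name": fn.__name__,
                    "kind": kind,
                    "duration_s": time.perf_counter() - start,
                })
        return wrapper
    return decorator

workflow = traced("workflow")  # stand-in for the SDK's @workflow
task = traced("task")          # stand-in for the SDK's @task

@task
def retrieve(query):
    return f"docs for {query}"

@workflow
def answer(query):
    return retrieve(query)

answer("pricing")
print([s["name"] for s in SPANS])  # inner task span finishes first: ['retrieve', 'answer']
```

Because the inner `@task` call completes before the outer `@workflow` returns, its span is recorded first; the backend reassembles the parent/child nesting into the execution tree.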
When sending traces, you can attach identifiers and metadata to each request:
- customer_identifier: track per-user metrics, budgets, and rate limits
- thread_identifier: group spans into conversation threads
- trace_group_identifier: link related traces across sessions
- metadata: attach any custom key-value pairs for filtering and tagging
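In a JSON ingest payload, these fields might look like the sketch below. The field names are the ones listed above; the surrounding payload shape and example values are assumptions, so check the ingest API reference for the exact schema and endpoint.

```python
# Hedged sketch of a request payload carrying Respan's identifier fields.
# Only the four identifier/metadata keys come from the docs above; the
# rest of the payload shape is illustrative.
import json

payload = {
    "model": "gpt-4o",                      # example value, not prescriptive
    "customer_identifier": "user_123",      # per-user metrics, budgets, rate limits
    "thread_identifier": "thread_42",       # groups spans into one conversation
    "trace_group_identifier": "session_7",  # links related traces across sessions
    "metadata": {"env": "production", "feature": "search"},  # free-form tags
}

body = json.dumps(payload)
print(json.loads(body)["customer_identifier"])  # → user_123
```

Filtering in the dashboard then works off these fields, so pick stable values (user IDs, session IDs) rather than free text.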
Monitor
Once data is flowing into Respan, your dashboard automatically shows requests, tokens, latency, cost, and error rates. You can drill into individual logs or traces to debug issues. To go further, you can set up:
- Views: save reusable filter configurations you apply often (e.g. “production errors”, “high-cost requests”).
- Alerts & notifications: get notified by email when issues are detected, like LLM outages or error spikes.
- Automations: add guardrails to your system by running online evaluations on live traffic, flagging problematic responses, or triggering alerts when quality drops.