  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Set up tracing

To observe your AI agents, start by setting up tracing. Traces capture the full execution tree of your workflows: every agent step, tool call, and model request in a single view. There are several ways to get started; see the agent tracing quickstart for a hands-on walkthrough:
  • Tracing SDK: install the SDK and add @workflow / @task decorators to your code. LLM calls are auto-captured. Works with any framework or custom code.
  • Framework integrations: if you’re already using OpenAI Agents SDK, Vercel AI SDK, Mastra, LangGraph, or other frameworks, use our pre-built exporters for zero-effort tracing.
  • Manual ingestion: send traces directly via the OTLP endpoint or JSON ingest API if you have existing telemetry pipelines.
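To make the decorator approach concrete, here is a minimal, self-contained sketch of the `@workflow` / `@task` pattern. The stand-in decorators below are not the real SDK (the actual import path and signatures may differ); they only mimic the span tree the SDK would record around your functions:

```python
import functools

# Stand-ins for the SDK's @workflow / @task decorators. The real import
# (e.g. something like `from respan import workflow, task`) is an
# assumption; these stubs just record the spans the SDK would capture.
SPANS = []

def _traced(kind):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            SPANS.append((kind, fn.__name__))  # record a span for this call
            return fn(*args, **kwargs)
        return wrapper
    return decorator

workflow = _traced("workflow")
task = _traced("task")

@task
def search_docs(query):
    # In a real agent this step might call a tool or an LLM; the model
    # request itself would be auto-captured by the SDK.
    return f"results for {query!r}"

@workflow
def answer_question(question):
    # The workflow span becomes the parent; nested @task calls appear
    # as child spans in the trace view.
    return search_docs(question)

answer_question("how do I set up tracing?")
print(SPANS)
```

Running this records one `workflow` span with a nested `task` span, which is the shape of the execution tree you see in the trace view.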
Once tracing is set up, you can pass tracing parameters to enrich your spans:
  • customer_identifier: track per-user metrics, budgets, and rate limits
  • thread_identifier: group spans into conversation threads
  • trace_group_identifier: link related traces across sessions
  • metadata: attach any custom key-value pairs for filtering and tagging
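How you attach these parameters depends on the integration (an SDK argument, a request-body field, or a span attribute); the exact mechanism is not shown here. As a purely illustrative sketch, the four enrichment parameters look like this:

```python
# Illustrative only: the four tracing parameters as a plain dict. The
# values are made-up examples, and how they are passed (SDK argument,
# body field, or span attribute) depends on your integration.
tracing_params = {
    "customer_identifier": "user_123",          # per-user metrics, budgets, rate limits
    "thread_identifier": "thread_abc",          # group spans into a conversation thread
    "trace_group_identifier": "onboarding_v2",  # link related traces across sessions
    "metadata": {"env": "production", "plan": "pro"},  # custom tags for filtering
}

# e.g. merged into an LLM request body alongside your usual fields:
request_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hi"}],
    **tracing_params,
}
```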
If you don’t need full agent tracing and just want to log each LLM call or tool use separately, use the Logging API instead. It’s a single API call per request and works well for simple setups or non-agentic workflows.
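As a hedged sketch of what one such log might contain, the payload below is hypothetical: the field names and endpoint are assumptions, so check the Logging API reference for the real schema before using it.

```python
import json

# Hypothetical payload for the Logging API: one call per LLM request.
# Field names here are assumptions, not the documented schema.
log_entry = {
    "model": "gpt-4o-mini",
    "prompt_messages": [{"role": "user", "content": "Summarize this doc"}],
    "completion_message": {"role": "assistant", "content": "Here is a summary."},
    "prompt_tokens": 42,
    "completion_tokens": 18,
    "latency": 0.8,  # seconds
    "customer_identifier": "user_123",
}
payload = json.dumps(log_entry)

# You would then POST `payload` to the Logging API endpoint with your
# API key in the Authorization header (endpoint URL omitted here since
# it is not specified in this section).
```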

Monitor

Once data is flowing into Respan, your dashboard automatically shows requests, tokens, latency, cost, and error rates. You can drill into individual logs or traces to debug issues. To go further, you can set up:
  • Views: save reusable filter configurations you apply often (e.g. “production errors”, “high-cost requests”).
  • Alerts & notifications: get notified by email when issues are detected, like LLM outages or error spikes.
  • Automations: add guardrails to your system by running online evaluations on live traffic, flagging problematic responses, or triggering alerts when quality drops.