Prerequisites

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page

Overview

CLI coding agents like Claude Code, Codex CLI, Gemini CLI, and OpenCode run in sandboxed terminal environments — making multiple LLM calls, reading files, executing shell commands, and editing code autonomously. Without observability, you have no visibility into what the agent did, how many tokens it consumed, or whether it made the right decisions. This cookbook shows how to add full tracing in 6 steps — no code changes required.

1. Install the CLI

npm install -g @respan/cli

2. Authenticate

respan auth login --api-key YOUR_API_KEY
Replace YOUR_API_KEY with the key from the API keys page. Verify you’re authenticated:
respan whoami

3. Integrate your agent

Run the integration command for your CLI agent:
respan integrate claude-code
This installs a Node.js hook at ~/.respan/hooks/claude-code.cjs and registers it in ~/.claude/settings.json. Every time Claude Code responds, the hook captures the turn and sends it to Respan.
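For reference, the entry added to ~/.claude/settings.json has roughly the following shape. This is an illustrative sketch based on Claude Code's hooks format; the event name and exact structure the Respan CLI writes may differ between versions:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node ~/.respan/hooks/claude-code.cjs"
          }
        ]
      }
    ]
  }
}
```

Removing this entry (or re-running integrate) is how the hook gets unregistered.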

4. Add attributes (optional)

Tag traces with customer IDs, project names, or custom metadata for filtering and cost tracking.

Via integrate flags

respan integrate claude-code \
  --customer-id "dev-alice" \
  --workflow-name "feature-work" \
  --project-id "proj_123" \
  --attrs '{"team": "backend", "sprint": "2025-Q3"}'
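The --attrs value must be valid JSON, which is easy to get wrong with shell quoting. One way to sidestep that is to build the string with a small script before passing it to the flag (a sketch, reusing the attribute keys from the example above):

```python
import json

# Build the custom-attributes payload programmatically so quoting and
# escaping are handled by json.dumps rather than by the shell.
attrs = {"team": "backend", "sprint": "2025-Q3"}
attrs_json = json.dumps(attrs)
print(attrs_json)  # → {"team": "backend", "sprint": "2025-Q3"}
```

The printed string can then be passed verbatim as the --attrs argument.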

Via environment variables (per session)

Override attributes for a single session without re-running integrate:
export RESPAN_CUSTOMER_ID="dev-alice"
export RESPAN_METADATA='{"task_id": "JIRA-456", "branch": "feat/dark-mode"}'
For OpenCode, use the standard OTEL variable:
export OTEL_RESOURCE_ATTRIBUTES="team=backend,task_id=JIRA-456"
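Note that OTEL_RESOURCE_ATTRIBUTES uses comma-separated key=value pairs rather than JSON. If you generate attributes from a script, a small conversion helper keeps the two formats in sync (a sketch; the keys simply mirror the examples above):

```python
def to_otel_resource_attributes(attrs: dict) -> str:
    """Render a dict as the comma-separated key=value format
    expected by OTEL_RESOURCE_ATTRIBUTES."""
    return ",".join(f"{key}={value}" for key, value in attrs.items())

print(to_otel_resource_attributes({"team": "backend", "task_id": "JIRA-456"}))
# → team=backend,task_id=JIRA-456
```

This simple form assumes values contain no commas or equals signs; escape those per the OpenTelemetry spec if they can occur.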

5. Use your agent as normal

No code changes required. Just run your CLI agent:
claude
Or non-interactively:
claude -p "Refactor the auth module to use JWT tokens"

6. View traces

Open Traces to see your agent activity. Each agent turn produces a trace like this:
claude-code (root)
  +-- claude.chat (LLM generation - model, tokens, input/output)
  +-- Thinking (extended reasoning, if present)
  +-- Tool: Bash (shell command)
  +-- Tool: Read (file read)
  +-- Tool: Write (file edit)
Each span includes:
  • Input/output — The user message and agent response
  • Model — Which model was used (e.g., claude-opus-4-6, gpt-5.4, gemini-3-flash)
  • Token usage — Prompt tokens, completion tokens, and reasoning tokens
  • Cost — Computed automatically from token counts
  • Latency — Time for each step
  • Tool details — File paths, shell commands, code edits
Filter traces by customer_identifier, workflow_name, or any custom metadata field to find specific sessions. Go to Users to see per-developer breakdowns of token usage, cost, and session count.
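As a mental model, the per-developer breakdown on the Users page is a group-by over trace metadata. The sketch below uses made-up trace records, not the Respan API; the field names just mirror the attributes described above:

```python
from collections import defaultdict

# Hypothetical exported trace summaries (illustrative data only).
traces = [
    {"customer_identifier": "dev-alice", "total_tokens": 1200, "cost": 0.018},
    {"customer_identifier": "dev-bob",   "total_tokens": 800,  "cost": 0.012},
    {"customer_identifier": "dev-alice", "total_tokens": 500,  "cost": 0.007},
]

# Aggregate sessions, tokens, and cost per customer_identifier.
usage = defaultdict(lambda: {"sessions": 0, "total_tokens": 0, "cost": 0.0})
for trace in traces:
    row = usage[trace["customer_identifier"]]
    row["sessions"] += 1
    row["total_tokens"] += trace["total_tokens"]
    row["cost"] += trace["cost"]

for customer, row in usage.items():
    print(customer, row)
```

Any custom metadata field (team, sprint, task_id, and so on) can serve as the grouping key in the same way.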

Configuration reference

Integrate flags

Flag              Description
--customer-id     Customer identifier for traces (e.g., developer name)
--workflow-name   Workflow name for grouping traces
--span-name       Root span name (defaults to agent name)
--project-id      Respan project ID
--attrs           Custom attributes JSON (e.g., '{"team":"backend"}')
--global          Apply to all projects (not just current directory)
--local           Apply to current project only
--dry-run         Preview changes without writing files

Environment variables

Variable               Agents                       Description
RESPAN_API_KEY         All                          Override the stored API key
RESPAN_CUSTOMER_ID     Claude Code, Codex, Gemini   Override customer identifier per session
RESPAN_METADATA        Claude Code, Codex, Gemini   JSON string merged into span metadata
RESPAN_WORKFLOW_NAME   Claude Code, Codex, Gemini   Override workflow name per session
RESPAN_BASE_URL        All                          Override API endpoint (for enterprise)
TRACE_TO_RESPAN        Claude Code                  Set to false to disable tracing
CC_RESPAN_DEBUG        Claude Code                  Enable debug logging
CODEX_RESPAN_DEBUG     Codex CLI                    Enable debug logging
GEMINI_RESPAN_DEBUG    Gemini CLI                   Enable debug logging

Debug logs

Agent         Log file
Claude Code   ~/.claude/state/respan_hook.log
Codex CLI     ~/.codex/state/respan_hook.log
Gemini CLI    ~/.gemini/state/respan_hook.log

Disabling tracing

Set TRACE_TO_RESPAN to false in .claude/settings.local.json:
{
  "env": {
    "TRACE_TO_RESPAN": "false"
  }
}

Next steps

  • Monitor an AI agent — Set up evaluation and alerting for agent quality
  • Track cost per feature — Attribute LLM costs to teams and features