AgentSpec

Trace AgentSpec agent workflows with Respan. To get started:
  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

{
  "mcpServers": {
    "respan-docs": {
      "url": "https://mcp.respan.ai/mcp/docs"
    }
  }
}
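
Where this file lives depends on your coding tool; in Cursor, for example, MCP servers are configured in .cursor/mcp.json.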

What is AgentSpec?

Open Agent Specification (Agent Spec) is Oracle's open specification for defining and running agentic systems, with pyagentspec as its Python SDK. It provides a portable way to describe agents, models, and workflows so the same definition can run across different runtimes.
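
As a minimal sketch of that portability (using only the classes that appear in the full example below), an agent is declared once as a runtime-agnostic specification, then loaded into a concrete runtime, here LangGraph:

import os
from pyagentspec.adapters.langgraph import AgentSpecLoader
from pyagentspec.agent import Agent
from pyagentspec.llms import OpenAiConfig

# Declare the agent once, independent of any execution engine.
agent = Agent(
    name="greeter",
    description="A one-line greeter.",
    llm_config=OpenAiConfig(
        name="openai",
        model_id="gpt-4o-mini",
        api_key=os.getenv("OPENAI_API_KEY"),
    ),
    system_prompt="Greet the user in one sentence.",
)

# Load the same specification into a LangGraph-executable agent.
langgraph_agent = AgentSpecLoader().load_component(agent)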

Setup

1. Install packages

$pip install "pyagentspec[langgraph]" respan-tracing respan-instrumentation-openinference openinference-instrumentation-agentspec python-dotenv

The quotes around pyagentspec[langgraph] keep shells like zsh from treating the brackets as a glob pattern.
2. Set environment variables

$export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
$export RESPAN_BASE_URL="https://api.respan.ai/api" # optional; this is the default
3. Initialize and run

import os
from dotenv import load_dotenv
from opentelemetry import trace
from opentelemetry.sdk.trace import ReadableSpan
from opentelemetry.sdk.trace.export import SpanProcessor
from openinference.instrumentation.agentspec import AgentSpecInstrumentor
from pyagentspec.adapters.langgraph import AgentSpecLoader
from pyagentspec.agent import Agent
from pyagentspec.llms import OpenAiConfig
from respan_instrumentation_openinference._translator import OpenInferenceTranslator
from respan_tracing import RespanTelemetry

load_dotenv(override=True)

RESPAN_API_KEY = os.getenv("RESPAN_API_KEY")
RESPAN_BASE_URL = (os.getenv("RESPAN_BASE_URL") or "https://api.respan.ai/api").rstrip("/")

# Route OpenAI-compatible AgentSpec calls through the Respan gateway.
os.environ["OPENAI_API_KEY"] = RESPAN_API_KEY
os.environ["OPENAI_BASE_URL"] = RESPAN_BASE_URL

telemetry = RespanTelemetry(
    app_name="agentspec-openinference-example",
    api_key=RESPAN_API_KEY,
    is_batching_enabled=False,
    instruments=set(),
)

class TranslatingProcessor(SpanProcessor):
    """Translate OpenInference attributes inline before export."""

    def __init__(self, translator: OpenInferenceTranslator, inner: SpanProcessor):
        self._translator = translator
        self._inner = inner

    def on_start(self, span, parent_context=None):
        self._inner.on_start(span, parent_context)

    def on_end(self, span: ReadableSpan):
        self._translator.on_end(span)
        self._inner.on_end(span)

    def shutdown(self):
        self._inner.shutdown()

    def force_flush(self, timeout_millis: int = 30000):
        return self._inner.force_flush(timeout_millis)

# AgentSpec maintains its own tracing runtime, so wrap the active processors
# directly instead of using OpenInferenceInstrumentor(...) or @workflow/@task.
tp = trace.get_tracer_provider()
asp = getattr(tp, "_active_span_processor", None)
original_processors = list(getattr(asp, "_span_processors", ()))

translator = OpenInferenceTranslator()
asp._span_processors = tuple(
    TranslatingProcessor(translator, proc) for proc in original_processors
)

instrumentor = AgentSpecInstrumentor()
instrumentor.instrument(tracer_provider=tp)

agent = Agent(
    name="haiku_assistant",
    description="A helpful assistant that writes haikus.",
    llm_config=OpenAiConfig(
        name="respan-openai",
        model_id="gpt-4o-mini",
        api_key=RESPAN_API_KEY,
    ),
    system_prompt="You are a helpful assistant. Respond only with a haiku.",
)

langgraph_agent = AgentSpecLoader().load_component(agent)

try:
    result = langgraph_agent.invoke(
        input={
            "messages": [
                {"role": "user", "content": "Write a haiku about recursion in programming."}
            ]
        },
    )
    print(result["messages"][-1].content)
finally:
    telemetry.flush()
    instrumentor.uninstrument()
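
The TranslatingProcessor rewrites each finished span's OpenInference attributes in place and then hands the span to the wrapped processor, so the exporters RespanTelemetry already registered keep working unchanged. To run the example (the file name is yours; main.py is assumed here):

$python main.py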
4. View your trace

Open the Traces page to see your AgentSpec workflow with specification execution, tool calls, and LLM generations.

What gets traced

All AgentSpec operations are auto-instrumented:

  • Agent specification execution
  • Tool calls and results
  • LLM calls with model, tokens, and input/output
  • Workflow orchestration

Traces appear in the Traces dashboard.
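
If spans are not arriving, a quick local check is to mirror every finished span to stdout before it is exported. This is a debugging sketch, assuming the globally registered tracer provider is an OpenTelemetry SDK TracerProvider (which the setup code above already relies on):

from opentelemetry import trace
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print each finished span to stdout alongside the normal export path.
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)

Drop these lines once traces show up in the dashboard.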

Learn more