Respan provides tracing for Instructor, letting you monitor and analyze structured LLM outputs across async workflows. With just three lines of setup, your structured outputs are traced automatically.
The integration uses the `@task` decorator to wrap async functions that call Instructor. Define Pydantic models with field descriptions for structured output validation, and all tracing happens automatically.
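Field descriptions can be attached with Pydantic's `Field`; Instructor includes them in the schema it sends to the model, which helps steer extraction. A minimal sketch (the model and its descriptions here are illustrative, not part of the library):

```python
from pydantic import BaseModel, Field

# Hypothetical model: each Field description documents what the
# LLM should put into that attribute during structured extraction.
class UserInfo(BaseModel):
    name: str = Field(description="The person's full name")
    age: int = Field(description="Age in years")
    email: str = Field(description="Contact email address")

# Validation works like any Pydantic model, with or without an LLM.
user = UserInfo(name="Ada", age=36, email="ada@example.com")
print(user)
```

Passing this class as `response_model` (as in the snippet below) is all Instructor needs to validate the LLM's output against it.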
Install `instructor`, `openai`, and `keywordsai-tracing`. Initialize `KeywordsAITelemetry` with your app name and the instruments you want enabled (here, OpenAI).
Set up an `AsyncOpenAI` client wrapped with `instructor.from_openai()`. Apply the `@task` decorator to your async functions and run workflows normally; tracing happens in the background.
```python
import instructor
from openai import AsyncOpenAI
from keywordsai_tracing import KeywordsAITelemetry, task
from pydantic import BaseModel

# Initialize telemetry once; the OpenAI instrument traces LLM calls.
telemetry = KeywordsAITelemetry(app_name="my-app", instruments=["openai"])
client = instructor.from_openai(AsyncOpenAI())

class UserInfo(BaseModel):
    name: str
    age: int
    email: str

@task(name="extract_user")
async def extract(text: str) -> UserInfo:
    return await client.chat.completions.create(
        model="gpt-4o-mini",
        response_model=UserInfo,
        messages=[{"role": "user", "content": text}],
    )
```