Automatic Instrumentation

Auto-instrument popular libraries like OpenAI and Anthropic

Overview

The Respan Tracing SDK can automatically instrument popular LLM libraries, capturing all API calls without manual tracing code.

Supported Libraries

Library   | Package           | Status
OpenAI    | openai            | ✅ Supported
Anthropic | @anthropic-ai/sdk | ✅ Supported

Setup

OpenAI Instrumentation

import OpenAI from 'openai';
import { RespanTelemetry } from '@respan/tracing';

const respanAi = new RespanTelemetry({
  apiKey: process.env.RESPAN_API_KEY,
  appName: 'my-app',
  instrumentModules: {
    openAI: OpenAI, // Pass the OpenAI class
  }
});

await respanAi.initialize();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// All OpenAI calls are automatically traced
await respanAi.withWorkflow(
  { name: 'ai_chat' },
  async () => {
    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Hello!' }
      ],
    });

    console.log(completion.choices[0].message.content);
  }
);

Anthropic Instrumentation

import Anthropic from '@anthropic-ai/sdk';
import { RespanTelemetry } from '@respan/tracing';

const respanAi = new RespanTelemetry({
  apiKey: process.env.RESPAN_API_KEY,
  appName: 'my-app',
  instrumentModules: {
    anthropic: Anthropic, // Pass the Anthropic class
  }
});

await respanAi.initialize();

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY
});

// All Anthropic calls are automatically traced
await respanAi.withWorkflow(
  { name: 'ai_chat' },
  async () => {
    const message = await anthropic.messages.create({
      model: 'claude-3-haiku-20240307',
      max_tokens: 1024,
      messages: [
        { role: 'user', content: 'Hello!' }
      ],
    });

    console.log(message.content);
  }
);

Multi-Provider Instrumentation

import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';
import { RespanTelemetry } from '@respan/tracing';

const respanAi = new RespanTelemetry({
  apiKey: process.env.RESPAN_API_KEY,
  appName: 'multi-provider-app',
  instrumentModules: {
    openAI: OpenAI,
    anthropic: Anthropic,
  }
});

await respanAi.initialize();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

await respanAi.withWorkflow(
  { name: 'multi_provider_comparison' },
  async () => {
    // Both providers are automatically traced
    const openaiResponse = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: 'Hello!' }]
    });

    const anthropicResponse = await anthropic.messages.create({
      model: 'claude-3-haiku-20240307',
      max_tokens: 100,
      messages: [{ role: 'user', content: 'Hello!' }]
    });

    return { openaiResponse, anthropicResponse };
  }
);

What Gets Traced

OpenAI

  • Chat Completions: openai.chat.completions.create()
  • Streaming: openai.chat.completions.create({ stream: true })
  • Embeddings: openai.embeddings.create()
  • Images: openai.images.generate()

Captured data:

  • Model name
  • Messages/prompts
  • Response content
  • Token usage
  • Latency
  • Errors

Anthropic

  • Messages: anthropic.messages.create()
  • Streaming: anthropic.messages.create({ stream: true })

Captured data:

  • Model name
  • Messages
  • Response content
  • Token usage
  • Latency
  • Errors
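Conceptually, each captured field lands as an attribute on the trace span for that call. As a rough, self-contained sketch of that mapping (the attribute names and the `toSpanAttributes` helper below are illustrative, not the SDK's actual schema), extracting them from an OpenAI-style response might look like:

```typescript
// Illustrative only: mapping an OpenAI-style response onto span attributes.
// The attribute names are invented for this sketch and may differ from
// what the Respan Tracing SDK actually records.

interface ChatResponse {
  model: string;
  choices: { message: { role: string; content: string } }[];
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

function toSpanAttributes(response: ChatResponse, latencyMs: number) {
  return {
    'llm.model': response.model,
    'llm.response.content': response.choices[0]?.message.content ?? '',
    'llm.usage.prompt_tokens': response.usage?.prompt_tokens ?? 0,
    'llm.usage.completion_tokens': response.usage?.completion_tokens ?? 0,
    'llm.latency_ms': latencyMs,
  };
}

// Example with a mocked response:
const attrs = toSpanAttributes(
  {
    model: 'gpt-4',
    choices: [{ message: { role: 'assistant', content: 'Hi there!' } }],
    usage: { prompt_tokens: 12, completion_tokens: 4, total_tokens: 16 },
  },
  230
);
console.log(attrs);
```

Errors are captured the same way: a failed call records the exception on the span instead of a response body.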

Configuration Options

Disable Specific Instrumentation

const respanAi = new RespanTelemetry({
  apiKey: process.env.RESPAN_API_KEY,
  appName: 'my-app',
  instrumentModules: {
    openAI: OpenAI,
    // anthropic: Anthropic, // Commented out to disable
  }
});

No Instrumentation

const respanAi = new RespanTelemetry({
  apiKey: process.env.RESPAN_API_KEY,
  appName: 'my-app',
  // Don't pass instrumentModules for manual tracing only
});

Manual Tracing with Auto-Instrumentation

You can combine auto-instrumentation with manual tracing:

import OpenAI from 'openai';
import { RespanTelemetry } from '@respan/tracing';

const respanAi = new RespanTelemetry({
  apiKey: process.env.RESPAN_API_KEY,
  appName: 'my-app',
  instrumentModules: { openAI: OpenAI }
});

await respanAi.initialize();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

await respanAi.withWorkflow(
  { name: 'research_workflow' },
  async () => {
    // Manual task
    const query = await respanAi.withTask(
      { name: 'prepare_query' },
      async () => {
        return 'What is quantum computing?';
      }
    );

    // Auto-instrumented OpenAI call
    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: query }]
    });

    // Manual task
    return await respanAi.withTask(
      { name: 'process_response' },
      async () => {
        return completion.choices[0].message.content;
      }
    );
  }
);

Streaming Support

Auto-instrumentation works with streaming:

await respanAi.withWorkflow(
  { name: 'streaming_chat' },
  async () => {
    const stream = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'Tell me a story' }],
      stream: true,
    });

    for await (const chunk of stream) {
      process.stdout.write(chunk.choices[0]?.delta?.content || '');
    }

    // The full stream is traced, including all chunks
  }
);
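To trace a streamed completion end-to-end, the instrumentation has to both pass chunks through to the caller and reassemble the full response once the stream closes. A minimal, self-contained sketch of that pattern (illustrative only, not the SDK's actual implementation; `withAccumulation` and the mock stream are invented for this example):

```typescript
// Illustrative only: a pass-through wrapper that reassembles streamed
// chunks so the complete response can be recorded when the stream ends.
async function* withAccumulation(
  chunks: AsyncIterable<string>,
  onComplete: (full: string) => void
): AsyncGenerator<string> {
  let full = '';
  for await (const chunk of chunks) {
    full += chunk;
    yield chunk; // forward each chunk to the caller unchanged
  }
  onComplete(full); // record the reassembled response, e.g. onto a span
}

// Mock stream standing in for an LLM's streamed deltas:
async function* mockStream(): AsyncGenerator<string> {
  yield 'Once upon ';
  yield 'a time.';
}

let recorded = '';
let streamed = '';
for await (const chunk of withAccumulation(mockStream(), (full) => { recorded = full; })) {
  streamed += chunk;
}
console.log(streamed);
```

The caller's loop is unaffected: it still sees each chunk as it arrives, while the wrapper quietly captures the complete text for the trace.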

Error Tracking

Auto-instrumentation captures errors:

await respanAi.withWorkflow(
  { name: 'error_handling' },
  async () => {
    try {
      await openai.chat.completions.create({
        model: 'invalid-model',
        messages: [{ role: 'user', content: 'Hello' }]
      });
    } catch (error) {
      // The error is automatically recorded in the trace
      console.error('OpenAI error:', error);
    }
  }
);

Best Practices

  • Always pass the library class (not an instance) to instrumentModules
  • Initialize auto-instrumentation before creating SDK instances
  • Combine auto-instrumentation with manual tracing for complete visibility
  • Auto-instrumentation captures all SDK calls within traced contexts
  • Use manual tracing for business logic around LLM calls
  • Auto-instrumentation has minimal performance overhead
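The first bullet matters because auto-instrumentation typically works by patching methods on the class's prototype, so only instances constructed from the patched class are traced. A toy, self-contained illustration of that mechanism (`FakeClient`, `instrument`, and the `calls` log are invented for this sketch; this is not the SDK's real code):

```typescript
// Toy illustration: patching a method on a class prototype so every
// instance created afterwards is "traced". Not the SDK's real code.

class FakeClient {
  create(prompt: string): string {
    return `response to: ${prompt}`;
  }
}

const calls: string[] = [];

function instrument(cls: typeof FakeClient): void {
  const original = cls.prototype.create;
  cls.prototype.create = function (this: FakeClient, prompt: string): string {
    calls.push(prompt); // record the call, as a span would
    return original.call(this, prompt);
  };
}

instrument(FakeClient);          // patch the class first...
const client = new FakeClient(); // ...then create instances
console.log(client.create('Hello!')); // → "response to: Hello!"
console.log('recorded calls:', calls.length);
```

This is also why initialization order matters: an instance created before the class is patched keeps the original, untraced methods.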

Troubleshooting

Instrumentation Not Working

Ensure you:

  1. Pass the class to instrumentModules (e.g., OpenAI, not openai)
  2. Call initialize() before creating SDK instances
  3. Wrap calls in withWorkflow, withTask, withAgent, or withTool
  4. Use the latest version of the Respan Tracing SDK

Example Debug

const respanAi = new RespanTelemetry({
  apiKey: process.env.RESPAN_API_KEY,
  appName: 'debug-app',
  instrumentModules: { openAI: OpenAI },
  logLevel: 'debug' // Enable debug logging
});

await respanAi.initialize();

// Check if instrumentation is active
const client = respanAi.getClient();
console.log('Recording:', client.isRecording());

Future Support

Additional libraries will be supported in future versions. Check the documentation for updates.