IBM Watsonx

Trace IBM Watsonx AI model calls with Respan. Before you begin:
  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page, or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is IBM Watsonx?

IBM Watsonx is IBM’s enterprise AI platform that provides foundation models, generative AI, and machine learning tools. It offers models like Granite and supports text generation, summarization, and classification for enterprise workloads.

Setup

1. Install packages

$pip install respan-ai opentelemetry-instrumentation-watsonx ibm-watsonx-ai python-dotenv
2. Set environment variables

$export WATSONX_API_KEY="YOUR_WATSONX_API_KEY"
$export WATSONX_PROJECT_ID="YOUR_WATSONX_PROJECT_ID"
$export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
$export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.respan.ai/api"
$export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer $RESPAN_API_KEY"
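
Because the script below calls load_dotenv(), you can keep these values in a local .env file instead of exporting them in every shell session. A sketch (all values are placeholders; note that a plain .env file does not expand $RESPAN_API_KEY the way a shell does, so spell the key out in the headers line):

```
WATSONX_API_KEY=YOUR_WATSONX_API_KEY
WATSONX_PROJECT_ID=YOUR_WATSONX_PROJECT_ID
RESPAN_API_KEY=YOUR_RESPAN_API_KEY
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.respan.ai/api
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer YOUR_RESPAN_API_KEY
```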
3. Initialize and run

import os
from dotenv import load_dotenv

load_dotenv()

from ibm_watsonx_ai.foundation_models import Model
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames
from respan import Respan

# Auto-discover and activate all installed instrumentors
respan = Respan(is_auto_instrument=True)

# Configure model parameters
params = {
    GenTextParamsMetaNames.MAX_NEW_TOKENS: 100,
    GenTextParamsMetaNames.TEMPERATURE: 0.7,
}

# Initialize the Watsonx model — calls are auto-traced by Respan
model = Model(
    model_id="ibm/granite-13b-instruct-v2",
    credentials={
        "apikey": os.getenv("WATSONX_API_KEY"),
        "url": "https://us-south.ml.cloud.ibm.com",
    },
    project_id=os.getenv("WATSONX_PROJECT_ID"),
    params=params,
)

response = model.generate_text("Say hello in three languages.")
print(response)
respan.flush()
4. View your trace

Open the Traces page to see your auto-instrumented LLM spans.

What gets traced

  • Model name and provider
  • Prompt and completion tokens
  • Input/output content
  • Response latency
  • Decoding parameters
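
As a rough illustration, the items above typically surface as span attributes loosely following the OpenTelemetry gen_ai semantic conventions. The exact attribute names and values below are an assumption and may differ across instrumentor versions:

```python
# Illustrative only: attribute names loosely follow the OpenTelemetry
# gen_ai semantic conventions; your instrumentor version may differ.
span_attributes = {
    "gen_ai.system": "watsonx",                              # provider
    "gen_ai.request.model": "ibm/granite-13b-instruct-v2",   # model name
    "gen_ai.request.temperature": 0.7,                       # decoding parameter
    "gen_ai.request.max_tokens": 100,                        # decoding parameter
    "gen_ai.usage.input_tokens": 12,                         # prompt tokens (example value)
    "gen_ai.usage.output_tokens": 34,                        # completion tokens (example value)
}
print(sorted(span_attributes))
```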

Traces appear in the Traces dashboard.

Learn more