HuggingFace

Trace HuggingFace Transformers and Inference API calls with Respan.
  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}

What is HuggingFace?

HuggingFace is the leading platform for machine learning models, datasets, and tools. The Transformers library provides access to thousands of pretrained models for text generation, classification, translation, and more.

Setup

1. Install packages

$ pip install respan-ai opentelemetry-instrumentation-transformers transformers python-dotenv
2. Set environment variables

$ export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
$ export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.respan.ai/api"
$ export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer $RESPAN_API_KEY"
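The same configuration can be set from Python before initializing tracing; a minimal sketch mirroring the shell exports above (the placeholder key is illustrative, use your real key in practice):

```python
import os

# Placeholder for illustration only; substitute your real Respan API key
os.environ.setdefault("RESPAN_API_KEY", "YOUR_RESPAN_API_KEY")

api_key = os.environ["RESPAN_API_KEY"]

# Mirror the shell exports: OTLP endpoint plus a Bearer auth header
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.respan.ai/api"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Bearer {api_key}"

print(os.environ["OTEL_EXPORTER_OTLP_HEADERS"])
```

Set these before any instrumented library is imported so the OpenTelemetry exporter picks them up.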
3. Initialize and run

from dotenv import load_dotenv

load_dotenv()

from transformers import pipeline
from respan import Respan

# Auto-discover and activate all installed instrumentors
respan = Respan(is_auto_instrument=True)

# Create a text generation pipeline (auto-traced by Respan)
generator = pipeline("text-generation", model="gpt2")

output = generator("Say hello in three languages:", max_new_tokens=100)
print(output[0]["generated_text"])
respan.flush()
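The text-generation pipeline returns a list of dicts, one per generated sequence, with the completion under `"generated_text"`. A minimal sketch of that shape, using mocked outputs (the generated strings are illustrative) so no model download is needed:

```python
# Mocked results mirroring generator(prompt, num_return_sequences=2):
# the pipeline returns one dict per generated sequence
mock_outputs = [
    {"generated_text": "Say hello in three languages: Hello, Bonjour, Hola"},
    {"generated_text": "Say hello in three languages: Hi, Ciao, Hallo"},
]

# The returned text includes the prompt; strip it to keep only the completion
prompt = "Say hello in three languages:"
completions = [o["generated_text"][len(prompt):].strip() for o in mock_outputs]

for c in completions:
    print(c)
```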
4. View your trace

Open the Traces page to see your auto-instrumented inference spans.

What gets traced

  • Model name and repository
  • Input/output content
  • Token usage
  • Inference latency
  • Pipeline configuration
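Of the attributes above, inference latency is the easiest to sanity-check by hand against the dashboard; a sketch timing a call with `time.perf_counter` (the `fake_generate` function is a stand-in for any pipeline call, not part of the Respan API):

```python
import time

def fake_generate(prompt: str) -> str:
    # Stand-in for a real pipeline call; sleeps to simulate inference time
    time.sleep(0.05)
    return prompt + " ..."

start = time.perf_counter()
result = fake_generate("Say hello")
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"latency: {elapsed_ms:.1f} ms")
```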

Traces appear in the Traces dashboard.

Learn more