Pipecat

Trace Pipecat real-time voice and multimodal AI pipelines with Respan.

  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.

```json
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://docs.respan.ai/mcp"
    }
  }
}
```

What is Pipecat?

Pipecat is an open-source framework for building real-time, multimodal AI applications. It provides a pipeline architecture for voice agents, video processing, and other real-time AI experiences with support for speech-to-text, LLMs, and text-to-speech.
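The core idea is that frames (audio chunks, transcripts, LLM responses) flow through a chain of processors, each transforming the frame and passing it downstream. The sketch below illustrates that pipeline concept in plain Python; the processor names are illustrative stand-ins, not the real Pipecat API:

```python
# Illustrative sketch of the frame-pipeline idea (plain Python, not
# Pipecat's actual classes): each processor transforms a frame and
# hands the result to the next stage.
class Processor:
    def process(self, frame):
        raise NotImplementedError


class Transcriber(Processor):  # stands in for a speech-to-text stage
    def process(self, frame):
        return {"text": frame["audio"].upper()}


class Responder(Processor):  # stands in for an LLM/TTS stage
    def process(self, frame):
        return {"reply": f"you said: {frame['text']}"}


def run_pipeline(processors, frame):
    """Push one frame through every processor in order."""
    for p in processors:
        frame = p.process(frame)
    return frame


result = run_pipeline([Transcriber(), Responder()], {"audio": "hello"})
print(result)  # {'reply': 'you said: HELLO'}
```

In real Pipecat applications the stages are asynchronous and stream frames continuously, but the ordered-chain structure is the same.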

Setup

1. Install packages

```shell
pip install respan-ai openinference-instrumentation-pipecat pipecat-ai
```
2. Set environment variables

```shell
export RESPAN_API_KEY="YOUR_RESPAN_API_KEY"
export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
```
3. Initialize and run

```python
import os
import asyncio
from dotenv import load_dotenv

load_dotenv()

from respan import Respan
from respan_instrumentation_openinference import OpenInferenceInstrumentor
from openinference_instrumentation_pipecat import PipecatInstrumentor
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.services.openai import OpenAILLMService

# Initialize Respan with Pipecat instrumentation
respan = Respan(
    instrumentations=[
        OpenInferenceInstrumentor(instrumentor=PipecatInstrumentor())
    ]
)

async def main():
    # Set up the LLM service
    llm = OpenAILLMService(
        api_key=os.getenv("OPENAI_API_KEY"),
        model="gpt-4o-mini",
    )

    # Build the pipeline
    pipeline = Pipeline([
        llm,
    ])

    # Create and run the pipeline task
    task = PipelineTask(pipeline)
    runner = PipelineRunner()
    await runner.run(task)

    # Flush any buffered spans before the process exits
    respan.flush()

asyncio.run(main())
```
4. View your trace

Open the Traces page to see your Pipecat pipeline with frame processing, LLM calls, and real-time latency metrics.

What gets traced

All Pipecat operations are auto-instrumented:

  • Pipeline frame processing
  • Speech-to-text transcription
  • LLM calls with model, tokens, and input/output
  • Text-to-speech synthesis
  • Real-time processing latency
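These operations arrive as nested spans: a pipeline run contains child spans for each STT, LLM, and TTS step, each carrying its own attributes and timing. As a rough mental model (the dict shapes below are illustrative, not Respan's actual span schema):

```python
# Rough mental model of one traced voice turn as a span tree.
# Field names and values are illustrative, not Respan's real schema.
trace = {
    "name": "pipeline.run",
    "children": [
        {"name": "stt.transcribe", "duration_ms": 120},
        {
            "name": "llm.completion",
            "duration_ms": 450,
            "attributes": {"model": "gpt-4o-mini", "tokens": 96},
        },
        {"name": "tts.synthesize", "duration_ms": 180},
    ],
}


def total_child_latency(span):
    """Sum the durations of a span's direct children."""
    return sum(child["duration_ms"] for child in span["children"])


print(total_child_latency(trace))  # 750
```

Viewing spans this way makes it easy to see which stage (transcription, generation, or synthesis) dominates the end-to-end latency of a voice turn.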

Traces appear in the Traces dashboard.

Learn more