The Respan PostHog integration bridges LLM engineering metrics into your product analytics stack. Track AI feature adoption, usage patterns, and cost alongside the product metrics you already monitor in PostHog.
Once enabled, Respan sends an event to PostHog for every LLM call, including the model name, token usage, cost, latency, and any custom metadata you attach. Build PostHog dashboards that show AI feature usage next to conversion funnels and user behavior.
Use this integration to answer questions like: Which user segments rely most on AI features? How does LLM latency correlate with user retention? Which AI features drive the most token spend?
Create a PostHog account and copy your project API key (the value beginning with phc_) from the project settings page.
Add the posthog_api_key and posthog_base_url parameters to your request bodies. Respan sends a PostHog event for each LLM call, tagged with model, cost, latency, token counts, and custom metadata.
Use the official Respan PostHog dashboard template to monitor LLM performance, or build custom dashboards using Respan event properties in PostHog.
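If you prefer to analyze Respan events outside a PostHog dashboard, you can pull them through PostHog's events API. A minimal sketch follows; the event name "respan_llm_call" is an assumption (check the event name Respan actually emits in your PostHog activity feed), and the events API requires a PostHog personal API key, not the phc_ project key:

```python
# Sketch: building a request against PostHog's events API to fetch
# Respan-generated events for offline analysis or a custom dashboard.
# ASSUMPTION: the event name "respan_llm_call" is hypothetical; verify
# the real event name in your PostHog activity feed.

def build_events_request(base_url, project_id, personal_api_key, event_name):
    """Return the URL, headers, and query params for PostHog's events endpoint."""
    url = f"{base_url}/api/projects/{project_id}/events/"
    # Personal API keys (not the phc_ project key) authenticate read requests.
    headers = {"Authorization": f"Bearer {personal_api_key}"}
    params = {"event": event_name}
    return url, headers, params

url, headers, params = build_events_request(
    "https://app.posthog.com", "12345", "phx_example", "respan_llm_call"
)
# Pass url/headers/params to your HTTP client of choice, e.g. requests.get(...)
```

From there, each returned event carries the Respan properties (model, cost, latency, tokens) under its properties field, so you can aggregate them however your analysis requires.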
```python
# Add PostHog params to any Respan API request.
# `client` is assumed to be an OpenAI-compatible client configured
# to route requests through Respan.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_body={
        "posthog_api_key": "phc_...",
        "posthog_base_url": "https://app.posthog.com",
    },
)
# Every LLM call now appears as a PostHog event with properties:
# model, cost, latency, tokens, and custom metadata.
```
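Since the PostHog parameters travel in every request body, it can help to build the extra_body payload in one place and attach custom metadata alongside the credentials. The helper and the "metadata" key name below are assumptions for illustration; consult Respan's request reference for the exact field name your deployment expects:

```python
# Sketch: assembling the per-request extra_body that pairs PostHog
# credentials with custom metadata.
# ASSUMPTION: the "metadata" key is hypothetical; check Respan's request
# reference for the actual custom-metadata field name.

def build_extra_body(posthog_api_key, posthog_base_url, metadata=None):
    """Return the extra_body dict for a Respan request with PostHog enabled."""
    body = {
        "posthog_api_key": posthog_api_key,
        "posthog_base_url": posthog_base_url,
    }
    if metadata:
        body["metadata"] = metadata  # forwarded onto the PostHog event
    return body

extra_body = build_extra_body(
    "phc_example",
    "https://app.posthog.com",
    metadata={"feature": "summarizer", "plan": "pro"},
)
# extra_body can now be passed to client.chat.completions.create(...)
```

Centralizing this construction keeps the PostHog credentials out of individual call sites and ensures every LLM call carries the same metadata shape into your dashboards.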