The Respan gateway provides full OpenAI SDK compatibility, letting you access 250+ models from OpenAI, Anthropic, Google, and more through a single API key. Change your base URL and API key; everything else stays the same.
Every request is automatically logged with token usage, cost, latency, and model information. The gateway also provides fallback routing, load balancing, and rate limit handling out of the box.
Point your OpenAI client to the Respan gateway by setting base_url to https://api.keywordsai.co/api/ and using your Respan API key. No other code changes needed.
The gateway proxies requests to the underlying provider, logs everything, and returns the response in the standard OpenAI format. Switch between models like gpt-4o, claude-3-5-sonnet, and gemini-1.5-pro by changing the model parameter.
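To make the model-switching point concrete, here is a minimal sketch showing that only the `model` string changes between providers. The `build_request` helper and the sample prompt are ours for illustration, not part of the Respan API:

```python
def build_request(model: str, prompt: str) -> dict:
    """Build identical chat-completion arguments; only the model string varies."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The same request shape works for every provider behind the gateway:
for model in ("gpt-4o", "claude-3-5-sonnet-20241022", "gemini-1.5-pro"):
    request = build_request(model, "Summarize our release notes.")
    # response = client.chat.completions.create(**request)
    print(request["model"])
```

Because the gateway normalizes every provider to the OpenAI chat-completions format, no per-provider branching is needed in application code.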
Use extra_body to pass Respan-specific parameters like customer_identifier for user tracking, fallback_models for automatic failover, and metadata for custom tagging.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.keywordsai.co/api/",
    api_key="YOUR_RESPAN_API_KEY",
)

# Use any model - OpenAI, Anthropic, Google, and more
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}],
    extra_body={
        "customer_identifier": "user-123",
        "fallback_models": ["claude-3-5-sonnet-20241022"],
    },
)
```
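The example above omits the `metadata` parameter. As a hedged sketch of a fuller `extra_body` payload, the tag keys below (`"feature"`, `"env"`) are invented for illustration and not a fixed schema:

```python
# Illustrative extra_body combining the documented Respan parameters.
# The metadata keys are example tags chosen by the caller, not a fixed schema.
extra_body = {
    "customer_identifier": "user-123",                   # per-user tracking
    "fallback_models": ["claude-3-5-sonnet-20241022"],   # used on failure
    "metadata": {"feature": "onboarding", "env": "production"},
}

# Passed through unchanged on the request, e.g.:
# client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": "Hello, world!"}],
#     extra_body=extra_body,
# )
print(sorted(extra_body))
```

Everything in `extra_body` is forwarded to the gateway alongside the standard OpenAI fields, so adding tags requires no other changes to the request.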