  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page
Add the Docs MCP to your AI coding tool to get help building with Respan. No API key needed.
{
  "mcpServers": {
    "respan-docs": {
      "url": "https://respan.ai/docs/mcp"
    }
  }
}
This integration is for the Respan gateway.

What is RubyLLM?

RubyLLM provides a unified Ruby interface for GPT, Claude, Gemini, and more. Since Respan is OpenAI-compatible, you can route all RubyLLM requests through the Respan gateway by pointing the OpenAI base URL to Respan.

Quickstart

Step 1: Get a Respan API key

Create an API key in the Respan dashboard.

Step 2: Install RubyLLM

gem install ruby_llm
Or add it to your Gemfile:
gem "ruby_llm"

Step 3: Configure RubyLLM with Respan

RubyLLM.configure do |config|
  config.openai_api_key = ENV["RESPAN_API_KEY"]
  config.openai_api_base = "https://api.respan.ai/api"
end
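If RESPAN_API_KEY is unset, the first request will fail with an opaque authentication error from the gateway. A minimal fail-fast sketch (the respan_api_key! helper is hypothetical, not part of RubyLLM):

```ruby
# Hypothetical helper: raise at boot when the Respan key is missing,
# so a misconfigured environment surfaces before any LLM request is sent.
def respan_api_key!
  ENV.fetch("RESPAN_API_KEY") do
    raise KeyError, "RESPAN_API_KEY is not set; create one in the Respan dashboard"
  end
end
```

You can then use config.openai_api_key = respan_api_key! inside the configure block above.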

Step 4: Make your first request

chat = RubyLLM.chat(model: "gpt-4o-mini")
response = chat.ask("Hello, world!")
puts response.content
All requests now go through the Respan gateway and are automatically logged.

Switch models

Respan supports 250+ models from all major providers, so you can switch models just by changing the model name. OpenAI models work out of the box. For non-OpenAI models (Claude, Gemini, etc.), add provider: :openai and assume_model_exists: true to route them through the Respan gateway:
# OpenAI models — works directly
chat = RubyLLM.chat(model: "gpt-4o")

# Non-OpenAI models — add provider and assume_model_exists
chat = RubyLLM.chat(model: "claude-3-5-haiku-20241022", provider: :openai, assume_model_exists: true)
chat = RubyLLM.chat(model: "gemini-2.0-flash", provider: :openai, assume_model_exists: true)

response = chat.ask("Tell me about artificial intelligence")
puts response.content
For non-OpenAI models, provider: :openai doesn't mean the model is from OpenAI; it tells RubyLLM to use the OpenAI API protocol when sending the request. Without it, RubyLLM would try to call the provider (e.g. Anthropic) directly, bypassing Respan. assume_model_exists: true skips RubyLLM's local model registry check. See the full model list for all available models.
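If you switch models dynamically, the extra keywords are easy to forget. A small sketch of one way to centralize the rule (respan_chat_options and the prefix list are illustrative, not part of RubyLLM; the prefix list is not exhaustive):

```ruby
# Model names that RubyLLM already recognizes as OpenAI models.
# Illustrative only; extend to match the models your app uses.
OPENAI_PREFIXES = %w[gpt- o1 o3 chatgpt-].freeze

# Build the keyword options for RubyLLM.chat. Non-OpenAI model names get
# provider: :openai and assume_model_exists: true so the request goes
# through the Respan gateway using the OpenAI protocol.
def respan_chat_options(model)
  opts = { model: model }
  unless OPENAI_PREFIXES.any? { |prefix| model.start_with?(prefix) }
    opts[:provider] = :openai
    opts[:assume_model_exists] = true
  end
  opts
end
```

Then every call site stays uniform: chat = RubyLLM.chat(**respan_chat_options(model_name)).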

Streaming

chat = RubyLLM.chat(model: "gpt-4o-mini")
chat.ask("Explain quantum computing") do |chunk|
  print chunk.content
end
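A common follow-up is keeping the full text while printing tokens as they arrive. A sketch, assuming each streamed chunk responds to #content as shown above (stream_and_collect is a hypothetical helper, not a RubyLLM method):

```ruby
# Hypothetical helper: stream a prompt, echoing tokens as they arrive,
# and return the accumulated full text when the stream finishes.
def stream_and_collect(chat, prompt)
  buffer = +""                     # mutable string to collect the stream
  chat.ask(prompt) do |chunk|
    print chunk.content            # show the token immediately
    buffer << chunk.content.to_s   # and keep it for later use
  end
  buffer
end
```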

Multi-tenancy with contexts

Use RubyLLM contexts to isolate per-tenant configuration:
tenant_ctx = RubyLLM.context do |config|
  config.openai_api_key = tenant.respan_api_key
  config.openai_api_base = "https://api.respan.ai/api"
end

chat = tenant_ctx.chat(model: "gpt-4o-mini")
response = chat.ask("Hello!")
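Rebuilding a context on every request is wasteful. One way to memoize one context per tenant, sketched as a generic thread-safe cache (TenantContextCache is a hypothetical class, not part of RubyLLM; tenant.id is an assumed attribute):

```ruby
require "monitor"

# Caches one object per tenant id; the builder block runs only on first use.
class TenantContextCache
  def initialize(&builder)
    @builder = builder
    @cache = {}
    @lock = Monitor.new
  end

  def for(tenant)
    @lock.synchronize { @cache[tenant.id] ||= @builder.call(tenant) }
  end
end
```

With RubyLLM you would construct it once, e.g. TenantContextCache.new { |tenant| RubyLLM.context { |config| ... } }, and call cache.for(tenant).chat(model: "gpt-4o-mini") per request.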

Rails integration

RubyLLM works with Rails via acts_as_chat. Set your Respan config in an initializer:
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key = ENV["RESPAN_API_KEY"]
  config.openai_api_base = "https://api.respan.ai/api"
end
Then use acts_as_chat as normal — all LLM calls will be routed through Respan.
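A sketch of the model setup from RubyLLM's Rails integration; exact model and table names follow your app's schema, so treat this as a template rather than a drop-in:

```ruby
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat            # provided by RubyLLM's Rails integration
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message         # persists each turn of the conversation
end
```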

View your analytics

Open your Respan dashboard to see detailed analytics for your requests.

Next Steps

User Management

Track user behavior and patterns

Prompt Management

Manage and version your prompts