  1. Sign up — Create an account at platform.respan.ai
  2. Create an API key — Generate one on the API keys page
  3. Add credits or a provider key — Add credits on the Credits page or connect your own provider key on the Integrations page

What is prompt management?

Prompt management lets you create, version, and deploy prompt templates centrally: instead of hardcoding prompts in your application, you reference them by ID. A hardcoded prompt looks like this:
from openai import OpenAI

client = OpenAI()

# The prompt text lives in the codebase, so every wording change
# requires a code change and a redeploy.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful customer support agent for TechCorp."
        },
        {
            "role": "user",
            "content": f"Customer {customer_name} is asking about {issue_type}"
        }
    ]
)

Prerequisites

Before you begin, make sure you have:
  1. Respan API key — get one from the API keys page. See API keys for details.
  2. LLM provider key — add your provider credentials (e.g. OpenAI) on the Providers page. See LLM provider keys for details.

Create your first prompt

Step 1: Create a new prompt

Go to the Prompts page and click Create new prompt. Name your prompt and add a description.
Step 2: Configure the prompt

In the Editor tab, set parameters like model, temperature, max tokens, and top P in the right sidebar.
Step 3: Write content with variables

Click + Add message to add messages. Use {{variable_name}} for dynamic content — see Variables for Jinja templates, JSON inputs, and more.
Please develop an optimized Python function to {{task_description}},
using {{specific_library}}. Include error handling and write unit tests.
Variable names must use underscores: {{task_description}}, not {{task description}}.
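As a rough illustration of what happens with these placeholders at request time, substitution can be modeled as a simple pattern replacement. This is a sketch only, not Respan's actual server-side implementation; `render_prompt` is a hypothetical helper:

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    # Replace each {{variable_name}} with its value from the variables dict.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

template = ("Please develop an optimized Python function to "
            "{{task_description}}, using {{specific_library}}.")
print(render_prompt(template, {
    "task_description": "square a number",
    "specific_library": "math",
}))
# → Please develop an optimized Python function to square a number, using math.
```

Note that `\w+` only matches word characters, which is why underscores work in variable names but spaces do not.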
Step 4: Test and commit

  1. Add values for each variable in the Variables tab.
  2. Click Run to test.
  3. Click Commit and write a commit message to save this version.
Avoid “Commit + deploy” unless you want changes to go live immediately.
Step 5: Deploy to production

Go to the Deployments tab and click Deploy. See Deployment & versioning for version pinning, rollbacks, and overrides.
Deploying immediately affects production. All API calls using this prompt will use the new version right away.

Use your prompt in code

Find the Prompt ID in the Overview panel on the Prompts page.
Then call it from your application using prompt schema v2 (recommended):
OpenAI SDKs strip v2 fields like schema_version and patch, so prompt schema v2 requires raw HTTP requests.
import requests

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_RESPAN_API_KEY',
}

data = {
    'prompt': {
        'prompt_id': 'YOUR_PROMPT_ID',
        'schema_version': 2,
        'variables': {
            'task_description': 'Square a number',
            'specific_library': 'math'
        }
    }
}

response = requests.post(
    'https://api.respan.ai/api/chat/completions',
    headers=headers,
    json=data
)
print(response.json())
You don’t need model and messages — the prompt configuration is used automatically.
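Assuming the response body follows the OpenAI chat-completions shape (an assumption; inspect the actual payload your endpoint returns), the generated text can be pulled out of the parsed JSON like this. The `sample` payload below is made up for illustration:

```python
# Hypothetical example payload in the OpenAI chat-completions shape.
sample = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "def square(n):\n    return n * n"}}
    ]
}

def completion_text(payload: dict) -> str:
    # The assistant's reply lives in the first choice's message.
    return payload["choices"][0]["message"]["content"]

print(completion_text(sample))
```

In the example above, you would call `completion_text(response.json())`.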
With v1, set override: true to let the prompt config take precedence over request-body parameters. Schema v1 is the default when schema_version is omitted.
from openai import OpenAI

# Point the OpenAI client at the Respan endpoint.
client = OpenAI(
    base_url="https://api.respan.ai/api",
    api_key="YOUR_RESPAN_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "placeholder"}],
    extra_body={
        "prompt": {
            "prompt_id": "YOUR_PROMPT_ID",
            "variables": {
                "task_description": "Square a number",
                "specific_library": "math"
            },
            "override": True
        }
    }
)
See Prompt merge modes (v1 vs v2) for full details.
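Conceptually, the v1 rule described above behaves like a shallow dictionary merge. This is a sketch under that assumption, not the gateway's actual code; `merge_v1` is a hypothetical helper:

```python
def merge_v1(request_params: dict, prompt_config: dict,
             override: bool = False) -> dict:
    # With override=True the stored prompt config wins on conflicts;
    # otherwise request-body parameters take precedence.
    if override:
        return {**request_params, **prompt_config}
    return {**prompt_config, **request_params}

# The stored config's temperature (0.2) wins when override is set.
merged = merge_v1(
    {"temperature": 0.9},
    {"temperature": 0.2, "model": "gpt-4o-mini"},
    override=True,
)
print(merged)
# → {'temperature': 0.2, 'model': 'gpt-4o-mini'}
```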

Monitor your prompts

Filter logs by prompt name on the Logs page to track usage, response times, and token consumption. See Prompt logging for logging setup.

Next steps