Update prompt version

You can update a prompt version by sending a PATCH request to the prompt version endpoint. The request can include the messages and model for the version, generation parameters, fallback models, load-balance models, tools, and whether to deploy this version as the live version.

- `messages` *array* **required**: The list of messages for the prompt version. To add a variable, use the format `{{variable_name}}`. In the example below, `{{role}}` could be rendered as "assistant", "teacher", and so on.

  **Example**
  ```json
  {
    "messages": [
      {"role": "system", "content": "You are a helpful {{role}}."},
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }
  ```
- `model` *string* **required**: Specify the model you want to use in this version.
- `description` *string*: Description of the prompt version.
- `stream` *boolean*: Whether the prompt version should be streamed. Default is `false`.
- `temperature` *float*: The sampling temperature of the model.
- `max_tokens` *integer*: The maximum number of tokens to generate.
- `frequency_penalty` *float*: How much to penalize new tokens based on their existing frequency in the text so far. Higher values decrease the model's likelihood of repeating the same line verbatim.
- `presence_penalty` *float*: How much to penalize new tokens based on whether they appear in the text so far. Higher values increase the model's likelihood of talking about new topics.
- `top_p` *float*: The nucleus sampling probability.
- `variables` *object*: The variables for the prompt version. You can reference these variables in the messages.

  **Example**
  ```json
  {
    "variables": {
      "role": ["assistant", "teacher", "student"]
    }
  }
  ```
- `fallback_models` *array*: The list of fallback models for the prompt version. Check out [fallback models](/documentation/features/gateway/advanced-configuration#fallback-models) for more information.

  **Example**
  ```json
  {
    "fallback_models": ["gpt-4o", "gpt-4o-mini"]
  }
  ```
- `load_balance_models` *array*: The list of models to load balance across for the prompt version. Check out [load balancing](/documentation/features/gateway/advanced-configuration#load-balancing) for more information.

  **Example**
  ```json
  {
    "load_balance_models": [
      {"model": "gpt-4o", "weight": 0.7},
      {"model": "gpt-4o-mini", "weight": 0.3}
    ]
  }
  ```
- `tools` *array*: The list of tools to use for the prompt version. Check out [tools](/documentation/features/gateway/advanced-configuration#function-calling) for more information.

  **Example**
  ```json
  {
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
              }
            },
            "required": ["location"]
          }
        }
      }
    ]
  }
  ```
- `deploy` *boolean*: Whether to deploy this version as the live version. Default is `false`. A newer version must exist before you can deploy a previous version. For example, to deploy v2, v3 must already exist.

```Python Python
import requests

# Replace {prompt_id} and {version} with your prompt ID and version number
url = "https://api.respan.ai/api/prompts/{prompt_id}/versions/{version}/"
api_key = "YOUR_RESPAN_API_KEY"  # Replace with your actual Respan API key

data = {
    "description": "A description of the prompt version",
    "messages": [
        {"role": "system", "content": "You are a helpful {{role}}."},
        {"role": "user", "content": "Hello, how are you?"}
    ],
    "model": "gpt-3.5-turbo",
    "stream": False,
    "temperature": 0.7,
    "max_tokens": 256,
    "top_p": 1.0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "variables": {},
    "fallback_models": ["gpt-3.5-turbo-16k", "gpt-4"],
    "load_balance_models": [
        {"model": "gpt-3.5-turbo", "weight": 0.7},
        {"model": "gpt-4", "weight": 0.3}
    ]
}
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}
response = requests.patch(url, headers=headers, json=data)
print(response.json())
```

```TypeScript TypeScript
// Replace {prompt_id} and {version} with your prompt ID and version number
fetch('https://api.respan.ai/api/prompts/{prompt_id}/versions/{version}/', {
  method: 'PATCH',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_RESPAN_API_KEY'
  },
  body: JSON.stringify({
    description: "A description of the prompt version",
    messages: [
      {role: "system", content: "You are a helpful {{role}}."},
      {role: "user", content: "Hello, how are you?"}
    ],
    model: "gpt-3.5-turbo",
    stream: false,
    temperature: 0.7,
    max_tokens: 256,
    top_p: 1.0,
    frequency_penalty: 0.0,
    presence_penalty: 0.0,
    variables: {},
    fallback_models: ["gpt-3.5-turbo-16k", "gpt-4"],
    load_balance_models: [
      {model: "gpt-3.5-turbo", weight: 0.7},
      {model: "gpt-4", weight: 0.3}
    ]
  })
})
  .then(response => response.json())
  .then(data => console.log(data));
```
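The `{{variable_name}}` template format can be illustrated with a small client-side sketch. Note that the gateway substitutes variables server-side when the prompt version is invoked; the `render_messages` helper below is hypothetical and only demonstrates how placeholders map to values.

```python
import re

def render_messages(messages, variables):
    """Substitute {{variable_name}} placeholders in message content.

    Hypothetical helper for illustration only: the Respan gateway
    performs this substitution server-side.
    """
    def render(text):
        # Replace each {{name}} with its value; leave unknown names intact.
        return re.sub(
            r"\{\{(\w+)\}\}",
            lambda m: str(variables.get(m.group(1), m.group(0))),
            text,
        )
    return [{**msg, "content": render(msg["content"])} for msg in messages]

messages = [
    {"role": "system", "content": "You are a helpful {{role}}."},
    {"role": "user", "content": "Hello, how are you?"},
]
print(render_messages(messages, {"role": "teacher"})[0]["content"])
# → You are a helpful teacher.
```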

Authentication

`Authorization` *Bearer*: API key authentication. Get your API key from https://platform.respan.ai/platform/api-keys

Path parameters

- `prompt_id` *string* **required**: Prompt ID
- `version` *string* **required**: Version
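Both path parameters are interpolated directly into the endpoint URL. As a quick sketch (the prompt ID below is hypothetical):

```python
prompt_id = "prompt_abc123"  # hypothetical prompt ID
version = "2"                # version number as a string

url = f"https://api.respan.ai/api/prompts/{prompt_id}/versions/{version}/"
print(url)
# → https://api.respan.ai/api/prompts/prompt_abc123/versions/2/
```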

Request

This endpoint expects an object.

Response

Successful response for Update prompt version.

- `variables` *object*

Errors

- `401`: Unauthorized Error
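A 401 is returned when the Authorization header is missing or carries an invalid key. A minimal sketch of mapping the documented error status to an actionable hint (the helper name and messages below are our own, not part of the API):

```python
def describe_error(status_code):
    """Map documented error statuses to a short hint (illustrative only)."""
    if 200 <= status_code < 300:
        return "OK"
    if status_code == 401:
        return "Unauthorized Error: verify the Bearer API key in the Authorization header"
    return f"Unexpected HTTP {status_code}; inspect the response body for details"

print(describe_error(401))
```

In practice you would check `response.status_code` after the PATCH call and retry with a corrected key on 401.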