API Documentation

Create Chat Completions

Creates a model response for the given chat conversation.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Required
ID of the model to use.

frequency_penalty · number | null · Optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

logit_bias · object | null · Optional
Modify the likelihood of specified tokens appearing in the completion.

logprobs · boolean | null · Optional
Whether to return log probabilities of the output tokens or not.

max_tokens · integer | null · Optional
The maximum number of tokens that can be generated in the chat completion.

n · integer | null · Optional
How many chat completion choices to generate for each input message.

presence_penalty · number | null · Optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

seed · integer | null · Optional
This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

stop · string | string[] | null · Optional
Up to 4 sequences where the API will stop generating further tokens.

stream · boolean | null · Optional
If set, partial message deltas will be sent.

stream_options · object | null · Optional
Options for streaming response. Only set this when you set stream: true.

temperature · number | null · Optional
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

top_p · number | null · Optional
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

user · string | null · Optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

messages · object[] · Required
A list of messages comprising the conversation so far.

function_call · object | null · Optional · Deprecated
Deprecated in favor of tool_choice. Controls which (if any) function is called by the model.

functions · object[] | null · Optional · Deprecated
Deprecated in favor of tools. A list of functions the model may generate JSON inputs for.

response_format · object | null · Optional
An object specifying the format that the model must output.

tool_choice · string (enum) | object | null · Optional
Controls which (if any) tool is called by the model.

tools · object[] | null · Optional
A list of tools the model may call.

top_logprobs · integer | null · Optional
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/chat/completions
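As a concrete illustration, a chat-completions body can be assembled and sent with the standard library alone. The base URL, API key, and model ID below are placeholders (assumptions), not values taken from this reference:

```python
import json
import urllib.request

BASE_URL = "http://localhost:9997"  # hypothetical server address
API_KEY = "sk-placeholder"          # placeholder token

def build_chat_request(model, messages, **options):
    """Assemble a /v1/chat/completions body; options left as None are omitted."""
    body = {"model": model, "messages": messages}
    body.update({k: v for k, v in options.items() if v is not None})
    return body

body = build_chat_request(
    "my-model",  # hypothetical model ID
    [{"role": "user", "content": "Say hello."}],
    temperature=0.2,
    max_tokens=64,
)

req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment against a live server
```

Omitting unset optional fields matches the schema above, where every optional parameter also admits null.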

Create Completion

Creates a completion for the provided prompt and parameters.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Required
ID of the model to use.

frequency_penalty · number | null · Optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

logit_bias · object | null · Optional
Modify the likelihood of specified tokens appearing in the completion.

logprobs · boolean | null · Optional
Whether to return log probabilities of the output tokens or not.

max_tokens · integer | null · Optional
The maximum number of tokens that can be generated in the completion.

n · integer | null · Optional
How many completion choices to generate for each prompt.

presence_penalty · number | null · Optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

seed · integer | null · Optional
This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

stop · string | string[] | null · Optional
Up to 4 sequences where the API will stop generating further tokens.

stream · boolean | null · Optional
If set, partial message deltas will be sent.

stream_options · object | null · Optional
Options for streaming response. Only set this when you set stream: true.

temperature · number | null · Optional
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

top_p · number | null · Optional
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

user · string | null · Optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

prompt · string | string[] | integer[] | integer[][] · Required
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.

best_of · integer | null · Optional
Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.

echo · boolean | null · Optional
Echo back the prompt in addition to the completion.

suffix · string | null · Optional
The suffix that comes after a completion of inserted text.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/completions
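A sketch of the prompt encodings this endpoint accepts; the model ID and the token IDs below are placeholders, not values from this reference:

```python
import json

# One string, a batch of strings, or a pre-tokenized prompt.
single = {"model": "my-model", "prompt": "Once upon a time", "max_tokens": 32}
batch = {"model": "my-model", "prompt": ["First story:", "Second story:"]}
tokens = {"model": "my-model", "prompt": [1234, 5678]}  # placeholder token IDs

# stop accepts one string or up to four sequences.
single["stop"] = ["\n\n", "THE END"]
payload = json.dumps(single)
```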

Create Images

Creates an image given a prompt.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Required
ID of the model to use.

prompt · string | string[] · Required
A text description of the desired image(s).

n · integer | null · Optional
The number of images to generate. Must be between 1 and 10. Default: 1

response_format · string | null · Optional
The format in which the generated images are returned. Must be one of url or b64_json. Default: url

size · string · Optional
The size of the generated images. Default: 1024*1024

num_inference_steps · integer · Optional
The number of inference steps to run for each image. Default: 20

guidance_scale · number · Optional
The scale of the guidance loss. Default: 7.5

negative_prompt · string | string[] | null · Optional
A negative prompt to help guide the model away from generating unwanted content.

kwargs · string | null · Optional
Additional JSON properties to pass with the request.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/images/generations
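A sketch of a generation body combining the parameters above; the model name is a placeholder, and the size string follows the documented default format:

```python
import json

body = {
    "model": "my-image-model",   # placeholder model ID
    "prompt": "a watercolor fox",
    "n": 2,                      # must be between 1 and 10
    "size": "1024*1024",         # the documented default
    "num_inference_steps": 30,   # overrides the default of 20
    "guidance_scale": 7.5,
    "response_format": "b64_json",
}
payload = json.dumps(body)
```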

Create Variations

Creates a variation of a given image.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Optional
ID of the model to use.

image · string · binary · Required
The image to use as the basis for the variation(s).

prompt · string | string[] | null · Optional
The prompt to use for generating the variation(s).

negative_prompt · string | string[] | null · Optional
The negative prompt to use for generating the variation(s).

n · integer · Optional
The number of variations to generate. Default: 1

response_format · string · Optional
The format in which the generated images are returned. Must be one of url or b64_json. Default: url

size · string | null · Optional
The size of the generated images.

kwargs · string | null · Optional
Additional JSON properties to pass with the request.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/images/variations
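Because of the binary image field, this endpoint takes multipart/form-data rather than JSON. A minimal encoding sketch with a placeholder model name and placeholder file bytes; in practice an HTTP client library handles this for you:

```python
import io
import uuid

def encode_multipart(fields, files):
    """Minimal multipart/form-data encoder: text fields plus binary file parts."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(f"--{boundary}\r\n".encode())
        buf.write(f'Content-Disposition: form-data; name="{name}"\r\n\r\n'.encode())
        buf.write(f"{value}\r\n".encode())
    for name, (filename, data) in files.items():
        buf.write(f"--{boundary}\r\n".encode())
        header = (
            f'Content-Disposition: form-data; name="{name}"; filename="{filename}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        )
        buf.write(header.encode())
        buf.write(data + b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return boundary, buf.getvalue()

boundary, payload = encode_multipart(
    {"model": "my-image-model", "n": "1"},    # placeholder model ID
    {"image": ("cat.png", b"\x89PNG fake")},  # placeholder bytes, not a real PNG
)
content_type = f"multipart/form-data; boundary={boundary}"
```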

Create Inpainting

Creates an inpainted image given an image and a mask.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Optional
ID of the model to use.

image · string · binary · Required
The image to inpaint.

mask_image · string · binary · Required
The mask image.

prompt · string | string[] | null · Optional
The prompt to use for inpainting.

negative_prompt · string | string[] | null · Optional
The negative prompt to use for inpainting.

n · integer · Optional
The number of inpainted images to generate. Default: 1

response_format · string · Optional
The format in which the generated images are returned. Default: url

size · string | null · Optional
The size of the generated images.

num_inference_steps · integer · Optional
The number of inference steps to take. Default: 20

kwargs · string | null · Optional
Additional JSON properties to pass with the request.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/images/inpainting
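Unlike variations, the inpainting form carries two required binary parts. A sketch of the field layout only, with placeholder names and bytes; any multipart/form-data encoder can carry these parts:

```python
# Text fields and binary file fields of one inpainting request.
fields = {
    "model": "my-image-model",  # placeholder model ID
    "prompt": "replace the sky with stars",
    "n": 1,
}
files = {
    "image": ("scene.png", b"\x89PNG fake"),      # the image to inpaint
    "mask_image": ("mask.png", b"\x89PNG fake"),  # the mask image, also required
}
```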

Create Transcriptions

Transcribes audio into the input language.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Optional
ID of the model to use.

file · string · binary · Required
The audio file object (not file name) to transcribe.

language · string | null · Optional
The language of the input audio.

prompt · string | null · Optional
An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.

response_format · string · Optional
The format of the transcript output. Default: json

kwargs · string | null · Optional
Additional JSON properties to pass with the request.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/audio/transcriptions
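Depending on response_format, the transcript comes back as JSON or plain text. A hedged parsing sketch: the "text" field assumed here follows the common OpenAI-style response shape and should be verified against your server's actual output:

```python
import json

def parse_transcript(raw: bytes, response_format: str = "json") -> str:
    """Interpret a /v1/audio/transcriptions response body."""
    if response_format == "json":
        # Assumption: the JSON format carries the transcript in a "text" field.
        return json.loads(raw.decode("utf-8"))["text"]
    return raw.decode("utf-8")  # plain-text formats pass through unchanged

text = parse_transcript(b'{"text": "hello world"}')
```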

Create Speech

Generates audio from the input text.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Required
ID of the model to use.

input · string · Required
The text to generate audio for.

voice · string (enum) | string | null · Optional
The voice to use when generating the audio.

response_format · string · Optional
The format in which to return the audio. Default: mp3

speed · number · Optional
The speed of the generated audio. Default: 1

stream · boolean · Optional
Stream the response. Default: false

kwargs · string | null · Optional
Additional JSON properties to pass with the request.

prompt_speech · string · binary · Optional
The audio file to use as prompt.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/audio/speech
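A sketch of a speech request body; the model ID and voice name below are assumptions, not values from this reference. With stream set to true the server sends audio as it is generated rather than one complete file:

```python
import json

body = {
    "model": "my-tts-model",   # placeholder model ID
    "input": "Reading this sentence aloud.",
    "voice": "default",        # placeholder voice name
    "response_format": "mp3",  # the documented default
    "speed": 1.25,
    "stream": False,
}
payload = json.dumps(body).encode("utf-8")
```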

Create Embedding

Creates an embedding vector representing the input text.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Required
ID of the model to use.

input · string | string[] · Required
The input to embed.

user · string | null · Optional
A unique identifier representing your end-user.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/embeddings
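Since input accepts a list, several texts can be embedded in one round-trip. A sketch of a batched body plus the typical downstream use of the returned vectors (the model name is a placeholder, and the two vectors below are toy values, not real embeddings):

```python
import json
import math

body = {
    "model": "my-embedding-model",  # placeholder model ID
    "input": ["first text", "second text"],  # batched: one request, two vectors
}
payload = json.dumps(body)

def cosine(a, b):
    """Cosine similarity, the usual comparison between embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

score = cosine([1.0, 0.0], [1.0, 1.0])  # ≈ 0.707
```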

Create Rerank

Reranks a list of documents based on a query.

Authorizations
Authorization · string · Required
Bearer authentication header of the form Bearer <token>.
Body
model · string · Required
ID of the model to use.

query · string · Required
The query to rerank the documents by.

documents · string[] · Required
The list of documents to rerank.

top_n · integer | null · Optional
The number of documents to return in the reranked list.

return_documents · boolean | null · Optional
Whether to return the reranked documents or not. Default: false

max_chunks_per_doc · integer | null · Optional
The maximum number of chunks to use per document.
Responses
200: Successful Response (application/json)
Response · any

POST /v1/rerank
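A sketch of a rerank body (the model ID is a placeholder). top_n trims the result list server-side, and return_documents set to true echoes each document back alongside its score:

```python
import json

body = {
    "model": "my-rerank-model",  # placeholder model ID
    "query": "What is the capital of France?",
    "documents": [
        "Paris is the capital of France.",
        "Berlin is the capital of Germany.",
    ],
    "top_n": 1,              # return only the single best match
    "return_documents": True,  # include document text, not just scores
}
payload = json.dumps(body)
```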
