Create chat completion
This endpoint processes chat completion requests, choosing streaming or non-streaming response handling based on the request payload. For streaming requests, it configures additional options to track token usage.
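For example, a minimal non-streaming request can be sent with plain HTTP. The sketch below uses Python's requests library; the base URL and the /v1/chat/completions path are placeholders assumed to match an OpenAI-compatible deployment, not values defined on this page.

```python
import os
import requests

# Placeholder endpoint; substitute your deployment's actual chat completions URL.
URL = "https://api.example.com/v1/chat/completions"

headers = {
    "Authorization": f"Bearer {os.environ['API_TOKEN']}",
    "Content-Type": "application/json",
}

payload = {
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,  # non-streaming: the server returns a single JSON response
}

resp = requests.post(URL, headers=headers, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```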
Returns
Returns a Response containing either:
- A streaming SSE connection for real-time completions
- A single JSON response for non-streaming completions
Errors
Returns an error status code if:
- The request processing fails
- The streaming/non-streaming handlers encounter errors
- The underlying inference service returns an error
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Body
Represents the create chat completion request.
The request body can describe either a standard chat completion or a streaming chat completion.
A list of messages comprising the conversation so far
A message that is part of the conversation, distinguished by the role of its author.
Each message in the chat completion request is a system message, a user message, an assistant message, or a tool message.
[
  {
    "role": "system",
    "content": "You are a helpful AI assistant"
  },
  { "role": "user", "content": "Hello!" },
  {
    "role": "assistant",
    "content": "I'm here to help you with any questions you have. How can I assist you today?"
  }
]
ID of the model to use
"meta-llama/Llama-3.3-70B-Instruct"
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far
0
Controls how the model responds to function calls
A list of functions the model may generate JSON inputs for
[
  {
    "name": "get_current_weather",
    "description": "Get the current weather in a location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "The location to get the weather for"
        }
      },
      "required": ["location"]
    }
  }
]
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
{ "1234567890": 0.5, "1234567891": -0.5 }
The maximum number of tokens to generate in the chat completion
4096
The maximum number of tokens to generate in the chat completion
4096
How many chat completion choices to generate for each input message
1
Whether to enable parallel tool calls.
true
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far
0
The format to return the response in
If specified, our system will make a best effort to sample deterministically
123
Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
- If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
- If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- When not set, the default behavior is 'auto'.
"auto"
Up to 4 sequences where the API will stop generating further tokens
"json([\"stop\", \"halt\"])"
Whether to stream back partial progress. Must be false for this request type.
false
Options for streaming response. Only set this when you set stream: true.
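As an illustration of the streaming variant, the sketch below consumes server-sent events with Python's requests library. It assumes an OpenAI-compatible SSE format (each event line prefixed with "data: ", the stream terminated by "data: [DONE]", and a trailing usage chunk when include_usage is set); the URL is again a placeholder.

```python
import json
import os
import requests

URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
headers = {"Authorization": f"Bearer {os.environ['API_TOKEN']}"}

payload = {
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,
    # Assumption: include_usage asks the server to append a final usage chunk,
    # following the OpenAI-style stream_options convention.
    "stream_options": {"include_usage": True},
}

with requests.post(URL, headers=headers, json=payload, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        # The final usage chunk may carry an empty choices list.
        if chunk.get("choices"):
            delta = chunk["choices"][0].get("delta", {})
            print(delta.get("content", ""), end="", flush=True)
```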
What sampling temperature to use, between 0 and 2
0.7
Controls which (if any) tool the model should use. Options: none, auto
A list of tools the model may call
A tool that can be used in a chat completion.
[
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The location to get the weather for"
          }
        },
        "required": ["location"]
      }
    }
  }
]
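When the model decides to invoke one of the tools above, an OpenAI-compatible response typically carries the call on the assistant message as a tool_calls list with JSON-encoded arguments; the sketch below reads it under that assumption (resp is the Response object from the earlier non-streaming example).

```python
import json

message = resp.json()["choices"][0]["message"]

# Assumption: tool calls surface as an OpenAI-style `tool_calls` list; if the
# field is absent, the model answered with plain text content instead.
for call in message.get("tool_calls", []):
    if call.get("type") == "function":
        name = call["function"]["name"]                   # e.g. "get_current_weather"
        args = json.loads(call["function"]["arguments"])  # arguments arrive JSON-encoded
        print(f"model requested {name} with arguments {args}")
```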
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
1
An alternative to sampling with temperature
1
A unique identifier representing your end-user
"user-1234"
Response
Represents the chat completion response.
The response body can be either a chat completion or a chat completion stream.
A list of chat completion choices.
Represents the chat completion choice.
Each choice contains either a chat completion message (non-streaming) or a chat completion chunk (streaming).
"[{\"index\": 0, \"message\": {\"role\": \"assistant\", \"content\": \"Hello! How can you help me today?\"}, \"finish_reason\": null, \"stop_reason\": null}]"
The Unix timestamp (in seconds) of when the chat completion was created.
1677652288
A unique identifier for the chat completion.
"chatcmpl-123"
The model used for the chat completion.
"meta-llama/Llama-3.3-70B-Instruct"
The object of the chat completion.
"chat.completion"
The service tier of the chat completion.
"auto"
The system fingerprint for the completion, if applicable.
"fp_44709d6fcb"
Usage statistics for the completion request.
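Assembled from the example field values above, a non-streaming response has roughly the following shape. The usage field names and token counts are illustrative assumptions following the common OpenAI-style convention; they are not defined on this page.

```python
# Illustrative shape only; every value comes from the examples above except
# the usage block, whose field names and counts are assumed.
example_response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "meta-llama/Llama-3.3-70B-Instruct",
    "service_tier": "auto",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! How can you help me today?",
            },
            "finish_reason": None,
            "stop_reason": None,
        }
    ],
    "usage": {
        "prompt_tokens": 12,
        "completion_tokens": 9,
        "total_tokens": 21,
    },
}
```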