POST /v1/chat/completions
import { AtomaSDK } from "atoma-sdk";

const atomaSDK = new AtomaSDK({
  bearerAuth: process.env["ATOMASDK_BEARER_AUTH"] ?? "",
});

async function run() {
  const completion = await atomaSDK.chat.create({
    messages: [
      { role: "developer", content: "You are a helpful assistant." },
      { role: "user", content: "Hello!" },
    ],
    model: "meta-llama/Llama-3.3-70B-Instruct",
  });

  console.log(completion.choices[0]);
}

run();

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
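For illustration, the header can also be constructed by hand; the token value below is a placeholder, not a real credential:

```typescript
// Building the Authorization header manually.
// "YOUR_AUTH_TOKEN" is a placeholder for your actual auth token.
const token = "YOUR_AUTH_TOKEN";
const headers = { Authorization: `Bearer ${token}` };
```

In the SDK example above, this is handled automatically by the `bearerAuth` option.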

Body

application/json
messages
object[]
required

A list of messages comprising the conversation so far.

model
string
required

ID of the model to use.

frequency_penalty
number | null

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

function_call
any

Controls how the model responds to function calls.

functions
any[] | null

A list of functions the model may generate JSON inputs for.

logit_bias
object | null

Modify the likelihood of specified tokens appearing in the completion.
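As a sketch, a logit_bias object maps token IDs (as strings, and model-specific; the ID below is hypothetical) to bias values between -100 and 100, where -100 effectively bans a token and 100 effectively forces it:

```typescript
// Maps token IDs to bias values. The token ID "50256" is a hypothetical
// example; real IDs depend on the model's tokenizer.
const logitBias: Record<string, number> = {
  "50256": -100, // strongly discourage this token
};
```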

max_tokens
integer | null

The maximum number of tokens to generate in the chat completion.

n
integer | null

How many chat completion choices to generate for each input message.

presence_penalty
number | null

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

response_format
any

The format to return the response in.

seed
integer | null

If specified, our system will make a best effort to sample deterministically.

stop
string[] | null

Up to 4 sequences where the API will stop generating further tokens.

stream
boolean | null
default: false

Whether to stream back partial progress. Must be false for this request type.

temperature
number | null

What sampling temperature to use, between 0 and 2.

tool_choice
any

Controls which (if any) tool the model should use.

tools
any[] | null

A list of tools the model may call.

top_p
number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. It is generally recommended to alter this or temperature, but not both.

user
string | null

A unique identifier representing your end-user.
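Putting several of the optional parameters above together, a request body might look like the following sketch (all values are illustrative):

```typescript
// Illustrative request body combining optional parameters.
const body = {
  model: "meta-llama/Llama-3.3-70B-Instruct",
  messages: [{ role: "user", content: "Summarize this in one line." }],
  max_tokens: 64,
  temperature: 0.7, // alter temperature or top_p, not both
  stop: ["\n\n"],   // up to 4 stop sequences
  stream: false,    // must be false for this request type
  user: "user-1234" // opaque end-user identifier
};
```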

Response

200 - application/json
choices
object[]
required

A list of chat completion choices.

created
integer
required

The Unix timestamp (in seconds) of when the chat completion was created.

id
string
required

A unique identifier for the chat completion.

model
string
required

The model used for the chat completion.

system_fingerprint
string | null

The system fingerprint for the completion, if applicable.

usage
object | null

Usage statistics for the completion request.
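To illustrate the response shape described above, here is a sketch of reading fields from a parsed response object; the concrete values (ID, timestamp, message content, token counts) are hypothetical:

```typescript
// Hypothetical parsed response, shaped per the fields documented above.
const response = {
  id: "chatcmpl-123",      // placeholder ID
  created: 1700000000,     // Unix timestamp in seconds
  model: "meta-llama/Llama-3.3-70B-Instruct",
  choices: [
    { index: 0, message: { role: "assistant", content: "Hello!" } },
  ],
  usage: { prompt_tokens: 10, completion_tokens: 2, total_tokens: 12 },
};

// The assistant's reply lives on the first choice's message.
const reply = response.choices[0].message.content;
```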