Create embeddings
This endpoint follows the OpenAI API format for generating vector embeddings from input text. The handler receives pre-processed metadata from middleware and forwards the request to the selected node.
Returns
Ok(Response) - The embeddings response from the processing node
Err(AtomaProxyError) - An error status code if any step fails
Errors
INTERNAL_SERVER_ERROR - Processing or node communication failures
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Body
Request object for creating embeddings. Since the endpoint follows the OpenAI API format, the fields are:
input (string or array of strings, required)
Input text to get embeddings for. Can be a string or an array of strings. Each input must not exceed the maximum input tokens for the model.
Example: "The quick brown fox jumped over the lazy dog"
model (string, required)
ID of the model to use.
Example: "intfloat/multilingual-e5-large-instruct"
dimensions (integer, optional)
The number of dimensions the resulting output embeddings should have.
Constraint: x >= 0
encoding_format (string, optional)
The format to return the embeddings in. Can be "float" or "base64". Defaults to "float".
Example: "float"
user (string, optional)
A unique identifier representing your end user, which can help monitor and detect abuse.
Example: "user-1234"
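Because the endpoint follows the OpenAI embeddings format, a request can be assembled with any HTTP client. Below is a minimal sketch in Python; the base URL, endpoint path (/v1/embeddings), and token are placeholder assumptions, not values taken from this document:

```python
import json
import urllib.request

ATOMA_BASE_URL = "https://example-proxy.invalid"  # placeholder base URL (assumption)
AUTH_TOKEN = "your-auth-token"                    # placeholder bearer token

def build_embeddings_request(text, model, encoding_format="float", user=None):
    """Build an OpenAI-style embeddings request with auth headers and JSON body."""
    body = {
        "input": text,  # a string or an array of strings
        "model": model,
        "encoding_format": encoding_format,
    }
    if user is not None:
        body["user"] = user
    headers = {
        "Authorization": f"Bearer {AUTH_TOKEN}",
        "Content-Type": "application/json",
    }
    # The /v1/embeddings path is assumed from the OpenAI API convention.
    return urllib.request.Request(
        f"{ATOMA_BASE_URL}/v1/embeddings",
        data=json.dumps(body).encode(),
        headers=headers,
        method="POST",
    )

req = build_embeddings_request(
    "The quick brown fox jumped over the lazy dog",
    "intfloat/multilingual-e5-large-instruct",
    user="user-1234",
)
```

Sending the request (e.g. with urllib.request.urlopen) returns either the embeddings response on success or an error status code, as described in the Returns section above.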
Response
Response object from creating embeddings, following the OpenAI API format:
data (array of objects)
List of embedding objects; each element is an individual embedding object in the response.
model (string)
The model used for generating embeddings.
Example: "intfloat/multilingual-e5-large-instruct"
object (string)
The object type, which is always "list".
Example: "list"
usage (object)
Usage statistics for the request.
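The response mirrors the OpenAI embeddings object, so it can be parsed generically. A short sketch, using a hand-written sample payload (the embedding values and token counts below are illustrative, not real model output):

```python
# Illustrative response in the shape described above; values are made up.
sample_response = {
    "object": "list",
    "model": "intfloat/multilingual-e5-large-instruct",
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.01, -0.02, 0.03]},
    ],
    "usage": {"prompt_tokens": 9, "total_tokens": 9},
}

def extract_vectors(response):
    """Return embedding vectors ordered by each object's reported index."""
    items = sorted(response["data"], key=lambda item: item["index"])
    return [item["embedding"] for item in items]

vectors = extract_vectors(sample_response)  # one vector per input string
```

Sorting by index keeps the vectors aligned with the order of the input strings even if the list arrives out of order.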