Get Started
Learn how to use Atoma’s Cloud API for AI inference
Atoma provides a fully OpenAI-compatible API that lets you access decentralized AI capabilities while maintaining privacy and security. This guide will help you get started with the API.
Authentication
All API requests require authentication using a bearer token. You can get your API key from the Atoma Dashboard. Include it in the Authorization header:
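As a minimal sketch in Python using the requests library (the API host below is a placeholder for illustration, not a documented value):

```python
import os
import requests

# Placeholder host for illustration only; substitute the actual Atoma API base URL.
BASE_URL = "https://<your-atoma-api-host>"

headers = {
    # Bearer authentication with the API key from the Atoma Dashboard
    "Authorization": f"Bearer {os.environ['ATOMA_API_KEY']}",
}

# List the models available to your account (see /v1/models below).
response = requests.get(f"{BASE_URL}/v1/models", headers=headers)
response.raise_for_status()
print(response.json())
```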
Base URL
All API endpoints are accessible through the base URL:
Available Endpoints
Atoma provides the following main endpoints:
Standard Endpoints (OpenAI Compatible)
- Chat Completions: /v1/chat/completions
- Embeddings: /v1/embeddings
- Image Generations: /v1/images/generations
- Models: /v1/models
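Because these endpoints mirror the OpenAI API, an existing OpenAI client library can be pointed at Atoma by overriding its base URL. A minimal sketch with the official openai Python package follows; the host and model name are placeholders rather than documented values:

```python
import os
from openai import OpenAI

# Point the standard OpenAI client at the Atoma API.
# The base_url host and the model name are placeholders for illustration.
client = OpenAI(
    base_url="https://<your-atoma-api-host>/v1",
    api_key=os.environ["ATOMA_API_KEY"],
)

completion = client.chat.completions.create(
    model="<model-id-from-/v1/models>",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)
```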
Confidential Computing Endpoints
- Confidential Chat: /v1/confidential/chat/completions
- Confidential Embeddings: /v1/confidential/embeddings
- Confidential Images: /v1/confidential/images/generations
Nodes
- Node Registration: /v1/nodes
- Node Lock Creation: /v1/nodes/lock
The node endpoints allow for:
- Registering nodes and updating their public addresses in the network
- Creating node locks for confidential computing
- Managing node public keys for encrypted communication
Key Features
- OpenAI Compatibility
  - Drop-in replacement for the OpenAI API
  - Supports all standard OpenAI endpoints
  - Compatible with existing OpenAI client libraries (see the embeddings example after this list)
- Privacy and Security
  - Support for confidential computing
  - Trusted Execution Environments (TEEs)
  - End-to-end encryption for sensitive data
- Distributed Infrastructure
  - Global network of compute nodes
  - Intelligent load balancing
  - High availability and fault tolerance
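As another illustration of the drop-in compatibility noted above, the same client can call the embeddings endpoint. This is a sketch only; the host and embedding model name are placeholders:

```python
import os
from openai import OpenAI

# Reuse the OpenAI client against Atoma's /v1/embeddings endpoint.
# The base_url host and embedding model name are placeholders for illustration.
client = OpenAI(
    base_url="https://<your-atoma-api-host>/v1",
    api_key=os.environ["ATOMA_API_KEY"],
)

embedding = client.embeddings.create(
    model="<embedding-model-id>",
    input="Decentralized, private AI inference.",
)
print(len(embedding.data[0].embedding))
```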
Quickstart with SDKs
Atoma provides official SDKs for easy integration: