Prerequisites

  1. Clone the Atoma Node repository:
git clone https://github.com/atoma-network/atoma-node
cd atoma-node
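
The compose setup below assumes Docker with the Compose plugin (needed for COMPOSE_PROFILES) and, for GPU-backed inference services, a working NVIDIA container runtime; the exact version requirements are assumptions, not pinned by this guide. A quick sanity check before starting anything:

# Verify Docker and the Compose plugin are available
docker --version
docker compose version

# For GPU-backed services, confirm the driver and GPU are visible (assumes an NVIDIA GPU)
nvidia-smi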

Start services

To learn more about the inference services, refer to the technical reference.

You can run the node in confidential compute mode (if you have a TEE-enabled node) with:

# Build and start all services
COMPOSE_PROFILES=chat_completions_vllm,embeddings_tei,image_generations_mistralrs,confidential docker compose up

# Start only one service
COMPOSE_PROFILES=chat_completions_vllm,confidential docker compose up

# Run in detached mode
COMPOSE_PROFILES=chat_completions_vllm,embeddings_tei,image_generations_mistralrs,confidential docker compose up -d
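
Once the stack is up (particularly in detached mode), you can check that the containers are healthy and follow their output. The service name below is a placeholder; list the actual names with the ps command first:

# List running services and their status
docker compose ps

# Follow logs for a single service (replace <service> with a name from the ps output)
docker compose logs -f <service>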

Otherwise, you can run the node in non-confidential mode with:

# Build and start all services
COMPOSE_PROFILES=chat_completions_vllm,embeddings_tei,image_generations_mistralrs,non-confidential docker compose up

# Start only one service
COMPOSE_PROFILES=chat_completions_vllm,non-confidential docker compose up

# Run in detached mode
COMPOSE_PROFILES=chat_completions_vllm,embeddings_tei,image_generations_mistralrs,non-confidential docker compose up -d
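
To stop a stack started with a given set of profiles, pass the same COMPOSE_PROFILES value to docker compose down so the profiled services are included (shown here for the non-confidential stack; adjust the profile list to match what you started):

# Stop and remove the services started above
COMPOSE_PROFILES=chat_completions_vllm,embeddings_tei,image_generations_mistralrs,non-confidential docker compose down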