VoltageGPU Documentation
Confidential AI Infrastructure.
Deploy hardware-sealed Intel TDX GPU pods and run confidential LLM inference — all via REST API.
Base URL: https://api.voltagegpu.com

Confidential Compute
Hardware-sealed Intel TDX GPU pods (H100, H200, B200). Encrypted memory, protected PCIe, on-chain attestation.
Hardware-encrypted · /pods/confidential-compute

Confidential Inference
OpenAI-compatible API for TEE LLMs running in Intel TDX enclaves: DeepSeek-R1-TEE, Qwen3-235B-TEE, and Qwen3-32B-TEE.
From $0.15/M tokens · /inference

Pods API
Create, manage and attest confidential pods. SSH keys, templates, volumes and backups.
REST API · /pods

Quick Start
Deploy a Confidential Pod
```shell
# Browse available TDX GPUs
curl https://api.voltagegpu.com/api/volt/machines

# Deploy a TDX-sealed pod
curl -X POST .../api/volt/pods -H "X-API-Key: ..."

# Connect to the running pod
ssh root@<pod-ip>
```
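For scripted deployments, the same deploy call can be assembled with Python's standard library. This is a minimal sketch: the endpoint and `X-API-Key` header come from the Quick Start above, while the `machine` payload field is a hypothetical illustration, not a documented schema; the request is built but not sent.

```python
import json
import urllib.request

BASE_URL = "https://api.voltagegpu.com"

def build_pod_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) the POST that deploys a TDX-sealed pod.

    The Volt API authenticates with an X-API-Key header. The payload
    schema is not shown on this page, so callers supply their own dict.
    """
    return urllib.request.Request(
        f"{BASE_URL}/api/volt/pods",
        data=json.dumps(payload).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical payload -- the field name is illustrative only.
req = build_pod_request("YOUR_KEY", {"machine": "H100-TDX"})
```

Sending it is then one `urllib.request.urlopen(req)` call once the payload matches the real pod-creation schema.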
Run Confidential Inference
```shell
curl https://api.voltagegpu.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_KEY" \
  -d '{"model":"deepseek-ai/DeepSeek-R1-0528-TEE","messages":[...]}'
```

Authentication
VoltageGPU uses two API styles depending on the product:
- Volt API (Confidential Pods, SSH keys): `X-API-Key: YOUR_KEY`
- Inference API (TEE chat completions): `Authorization: Bearer YOUR_KEY`
Get your API key at voltagegpu.com/dashboard/settings
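Because the two products authenticate differently, a small helper that picks the header from the request path can prevent mix-ups. A sketch, assuming the two path prefixes listed in the API overview are the only families:

```python
def auth_header(api_key: str, path: str) -> dict:
    """Return the auth header for a VoltageGPU request path.

    Volt endpoints (/api/volt/*) take X-API-Key; the OpenAI-compatible
    inference endpoints (/v1/*) take a Bearer token.
    """
    if path.startswith("/api/volt/"):
        return {"X-API-Key": api_key}
    if path.startswith("/v1/"):
        return {"Authorization": f"Bearer {api_key}"}
    raise ValueError(f"no known API family for path: {path}")
```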
API Overview
| API | Base path | Endpoints | Auth |
|---|---|---|---|
| Volt (Confidential Infrastructure) | /api/volt/* | Confidential pods, SSH keys, templates, volumes | X-API-Key |
| Inference (OpenAI-compat) | /v1/* | TEE chat completions + models list | Bearer |
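Putting the pieces together, the confidential inference call from the Quick Start can also be assembled with Python's standard library alone. A sketch: the endpoint, model name, and Bearer auth come from this page; the message content is illustrative, only the minimal body fields are included, and the request is constructed but not sent.

```python
import json
import urllib.request

def chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat completion request.

    Uses Bearer auth as the /v1/* inference endpoints require; only the
    model and messages fields are set, other sampling parameters omitted.
    """
    body = {"model": model, "messages": messages}
    return urllib.request.Request(
        "https://api.voltagegpu.com/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Example message content is illustrative only.
req = chat_request("YOUR_KEY", "deepseek-ai/DeepSeek-R1-0528-TEE",
                   [{"role": "user", "content": "Hello"}])
```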