Confidential Pods

A pod on VoltageGPU is a hardware-sealed Intel TDX enclave with dedicated GPU access. Memory is encrypted, PCIe traffic is protected, and every deployment is attested — data is isolated from the host operator and the hypervisor by design.

30-60 sec deployment
Isolated containers
Global availability

What is a Confidential Pod?

A confidential pod is a hardware-sealed computing environment with dedicated GPU access running inside an Intel TDX enclave — memory is encrypted and attestation is verifiable.

Pod Architecture

Each pod runs on Intel TDX hardware with direct access to NVIDIA GPUs. Memory inside the enclave is AES-encrypted, PCIe traffic to the GPU is protected, and every deployment produces a fresh hardware attestation.

Dedicated GPU Access

Full or fractional GPU allocation with CUDA, cuDNN, and TensorRT pre-installed.

Hardware-sealed Isolation

Intel TDX encrypts pod memory in use; the host operator and hypervisor cannot inspect it.

Root Access

Full root access via SSH or web terminal. Install any packages you need.

Persistent Storage

NVMe storage persists across restarts. Attach external volumes for large datasets.

Available Confidential GPU Types

Every GPU runs inside an Intel TDX enclave with hardware attestation. Pick the class that fits your confidential workload.

GPU Model   | VRAM         | Best For                                        | Starting Price
B200        | 192GB HBM3e  | Latest gen, confidential training at scale      | TDX sealed
H200        | 141GB HBM3e  | Flagship, large confidential models, multi-GPU  | TDX sealed
H100 80GB   | 80GB HBM3    | Confidential transformer inference, FP8         | TDX sealed
Live inventory & pricing

Every pod runs inside an Intel TDX hardware enclave. Query live inventory and authoritative per-hour prices with GET /api/volt/machines — what you see on the dashboard is the same feed.
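The inventory endpoint can be queried with any HTTP client. A minimal sketch with Python's standard library, assuming the API is served from voltagegpu.com and returns a JSON list of machine entries — the base URL and the price_per_hour / resource_name field names are assumptions, so check them against the live response:

```python
import json
from urllib.request import Request, urlopen

# Base URL assumed from voltagegpu.com; endpoint path is from the docs.
API_URL = "https://voltagegpu.com/api/volt/machines"

def fetch_machines(url=API_URL):
    """Fetch live Intel TDX inventory as parsed JSON."""
    req = Request(url, headers={"Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

def cheapest(machines, price_key="price_per_hour"):
    """Return the lowest-priced entry; the price field name is an assumption."""
    return min(machines, key=lambda m: m.get(price_key, float("inf")))
```

Because the dashboard reads the same feed, filtering this response by resource_name is enough to script deployments against current availability.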

Quick Start

Deploy your first GPU pod in under 60 seconds with these simple steps.

1. Create an Account

Sign up at voltagegpu.com/register and get $5 free credit with code HASHCODE-voltage-665ab4.

2. Add SSH Key

Navigate to Dashboard → SSH Keys and add your public SSH key for secure access.
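If you don't already have a key pair, one can be generated locally. A sketch using OpenSSH's ssh-keygen with an Ed25519 key — the file path and empty passphrase (-N "") are illustrative choices, and you may prefer a passphrase for a key used in production:

```shell
# Pick a key location (default shown is illustrative)
KEY="${KEY:-$HOME/.ssh/voltagegpu_ed25519}"
mkdir -p "$(dirname "$KEY")"

# Generate an Ed25519 key pair non-interactively (empty passphrase)
ssh-keygen -t ed25519 -N "" -C "voltagegpu" -f "$KEY" -q

# Print the public half; paste this into Dashboard → SSH Keys
cat "$KEY.pub"
```

Only the .pub file is uploaded; the private key never leaves your machine.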

3. Browse Confidential Inventory

Go to Browse Pods or call GET /api/volt/machines to see live Intel TDX inventory, per-hour prices, and available resource_name values.

4. Deploy

Pick a resource, choose a Docker image (or reuse a saved template), attach your SSH key, and deploy. Your balance is debited one hour upfront.

5. Connect via SSH

ssh <workload-uid>@ssh.deployments.targon.com
Web Terminal Available

Access your pod directly from the browser at Your Pods → [Pod Name] → Terminal without needing a local SSH client.

Frequently Asked Questions

What happens if my pod crashes?

Your local volume data (/root) remains intact. External volumes (/mnt) are always preserved. You can restart the pod or deploy a new one and restore from backup.

Can I use custom Docker images?

Yes! Add your Docker credentials in Dashboard → Docker Credentials, then select your custom image when creating a pod. Images must be publicly accessible or use authenticated registries.

How is billing calculated?

Billing is per-hour based on the GPU type and number of GPUs rented. Billing starts when the pod is deployed and stops when terminated. Stopped pods do not incur GPU charges.
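The per-hour model reduces to a simple formula. The $2.50 rate below is purely illustrative — actual rates come from GET /api/volt/machines:

```python
def pod_cost(price_per_gpu_hour: float, num_gpus: int, hours: float) -> float:
    """Estimated charge: hourly GPU rate x GPU count x runtime.

    Billing starts at deployment and stops at termination; note that
    one hour is debited upfront (see step 4 of the Quick Start).
    """
    return price_per_gpu_hour * num_gpus * hours

# Example: 2 GPUs at an illustrative $2.50/GPU-hour, running for 6 hours
print(pod_cost(2.50, 2, 6))  # 30.0
```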

Can I attach multiple GPUs?

Yes, if the node supports it. When renting, you can select the number of GPUs (up to the node's total). CPU, memory, and storage scale proportionally with GPU count.
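Proportional scaling means a pod receives the same fraction of the node's CPU, memory, and storage as it takes of the node's GPUs. A sketch with illustrative node specs (the numbers are not real inventory):

```python
def pod_share(node_total: dict, node_gpus: int, gpus_requested: int) -> dict:
    """Scale each resource by the fraction of the node's GPUs being rented."""
    if not 1 <= gpus_requested <= node_gpus:
        raise ValueError("GPU count must be between 1 and the node's total")
    frac = gpus_requested / node_gpus
    return {name: amount * frac for name, amount in node_total.items()}

# Illustrative 8-GPU node: renting 2 GPUs yields a quarter of each resource
node = {"vcpus": 128, "memory_gb": 1024, "nvme_gb": 8000}
print(pod_share(node, node_gpus=8, gpus_requested=2))
```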

Is my data secure?

Each pod runs in an isolated container with its own network namespace. Storage is encrypted at rest. SSH access requires your private key. We never access your data.

What software is pre-installed?

Templates include CUDA, cuDNN, Python, and framework-specific packages (PyTorch, TensorFlow, JAX). You have root access to install anything else via apt or pip.

Ready to Deploy Your First Pod?

Get started with $5 free credit. No credit card required.