Confidential Compute

Confidential GPU Pods

Hardware-attested GPU pods powered by Intel TDX and Bittensor Subnet 4 (Targon). Your data is encrypted in use, at rest, and in transit via the Targon Virtual Machine (TVM).

Intel TDX Attestation
Encrypted in use
H200 / B200 / H100

What is Confidential Compute?

Confidential Compute pods run inside a Trusted Execution Environment (TEE) powered by Intel TDX (Trust Domain Extensions) and NVIDIA's Protected PCIe. The Targon Virtual Machine (TVM) ensures that your code and data are encrypted even while being processed by the GPU — the host operator cannot access your workload.

This is ideal for enterprise workloads involving sensitive data, regulated industries (healthcare, finance, defense), or any scenario where you need cryptographic proof that your computation was not tampered with. Every pod is hardware-attested before provisioning.

Hardware Attestation

Every pod is verified via Intel TDX remote attestation before your workload starts. Cryptographic proof that the hardware is genuine and unmodified.

Data Encryption

Memory encryption via TDX ensures data is encrypted in use, at rest, and in transit. Even the host operator and hypervisor cannot read your data.

Zero-Knowledge Compute

The infrastructure provider has zero visibility into your workload. Code, model weights, and training data remain fully private.

Available GPUs

| GPU         | Config        | vCPU | RAM     | Price/hr |
|-------------|---------------|------|---------|----------|
| NVIDIA H200 | h200-small    | 14   | 175 GB  | $3.60    |
| NVIDIA H200 | h200-medium   | 28   | 350 GB  | $7.20    |
| NVIDIA H200 | h200-large    | 56   | 700 GB  | $14.40   |
| NVIDIA H200 | h200-xlarge   | 112  | 1400 GB | $28.80   |
| NVIDIA H100 | h100-small    | 12   | 150 GB  | $2.69    |
| NVIDIA B200 | b200-small    | 16   | 192 GB  | $7.50    |
| RTX 4090    | rtx4090-small | 8    | 64 GB   | $0.68    |

Prices include a 50% markup on Targon base rates. Pricing is dynamic; check live inventory via GET /api/volt/machines?confidential=true for current rates and availability.
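The markup arithmetic can be sketched as follows; `confidential_price` is a hypothetical helper for illustration, not part of any VoltageGPU SDK:

```python
def confidential_price(base_rate: float, markup: float = 0.50) -> float:
    """Final hourly price: Targon base rate plus the platform markup."""
    return round(base_rate * (1 + markup), 2)

# The inventory response's cost_per_hour (2.40) maps to the listed
# final_price_per_hour (3.60) under the 50% markup.
print(confidential_price(2.40))  # → 3.6
```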

Deploy via API

Step 1: List Inventory

Query available Confidential Compute machines and live pricing.

| Method | Endpoint                             | Auth      |
|--------|--------------------------------------|-----------|
| GET    | /api/volt/machines?confidential=true | X-API-Key |
Request
curl -X GET "https://api.voltagegpu.com/api/volt/machines?confidential=true" \
  -H "X-API-Key: YOUR_API_KEY"
Response
{
  "success": true,
  "items": [
    {
      "name": "h200-small",
      "display_name": "NVIDIA H200 - Small",
      "spec": {
        "gpu_type": "NVIDIA-H200",
        "gpu_count": 1,
        "vcpu": 14,
        "memory": 175000
      },
      "cost_per_hour": 2.40,
      "final_price_per_hour": 3.60,
      "available": 97,
      "provider": "targon"
    }
  ]
}

Step 2: Deploy a Pod

Create a new Confidential Compute pod. Set provider to "targon".

| Method | Endpoint       | Auth      |
|--------|----------------|-----------|
| POST   | /api/volt/pods | X-API-Key |

Request Body

| Parameter   | Required | Description                                       |
|-------------|----------|---------------------------------------------------|
| provider    | required | Set to "targon" for Confidential Compute          |
| name        | required | Pod name (alphanumeric, hyphens allowed)          |
| machine_id  | required | Machine config from inventory (e.g. "h200-small") |
| image       | optional | Docker image (default: "ubuntu:22.04")            |
| ssh_key_ids | optional | Array of SSH key IDs to attach                    |
Request
curl -X POST "https://api.voltagegpu.com/api/volt/pods" \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "targon",
    "name": "my-secure-pod",
    "machine_id": "h200-small",
    "image": "ubuntu:22.04",
    "ssh_key_ids": ["key_abc123"]
  }'
Response
{
  "success": true,
  "pod": {
    "id": "pod_xyz789",
    "targonUid": "wkld_abc123",
    "hourlyPrice": 3.60,
    "status": "RUNNING",
    "provider": "targon",
    "ssh_command": "ssh wkld_abc123@ssh.deployments.targon.com"
  }
}
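The request body can also be assembled programmatically. A sketch under the parameter rules listed above; `build_deploy_payload` is a hypothetical helper, not an SDK function:

```python
import re

def build_deploy_payload(name, machine_id, image="ubuntu:22.04", ssh_key_ids=None):
    """Build the POST /api/volt/pods body for a Confidential Compute pod."""
    # Enforce the documented name rule: alphanumeric, hyphens allowed.
    if not re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9-]*", name):
        raise ValueError("pod name must be alphanumeric, hyphens allowed")
    body = {
        "provider": "targon",  # required for Confidential Compute
        "name": name,
        "machine_id": machine_id,
        "image": image,
    }
    if ssh_key_ids:
        body["ssh_key_ids"] = ssh_key_ids
    return body
```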

Step 3: Check Status

Retrieve the current status and details of your pod.

| Method | Endpoint           | Auth      |
|--------|--------------------|-----------|
| GET    | /api/volt/pods/:id | X-API-Key |
Request
curl -X GET "https://api.voltagegpu.com/api/volt/pods/pod_xyz789" \
  -H "X-API-Key: YOUR_API_KEY"
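Pods typically provision within 30-90 seconds, so a simple poll loop on this endpoint suffices. A sketch: `fetch_status` stands in for any callable you supply that calls GET /api/volt/pods/:id and returns the pod's status string (a hypothetical abstraction, not an SDK function):

```python
import time

def wait_until_running(fetch_status, pod_id, timeout=180, interval=5):
    """Poll until the pod reports RUNNING; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_status(pod_id) == "RUNNING":
            return True
        time.sleep(interval)
    return False
```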

Step 4: Connect via SSH

Once the pod status is RUNNING, connect with the Targon SSH gateway.

ssh wkld_abc123@ssh.deployments.targon.com

Note: The SSH connection uses the targonUid (workload UID) returned in the deploy response, not the VoltageGPU pod ID.
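A one-line helper makes it harder to mix up the two identifiers; this is a hypothetical sketch mirroring the deploy response fields above:

```python
def ssh_command(pod: dict) -> str:
    """Build the SSH command from the pod's targonUid, never its id."""
    return f"ssh {pod['targonUid']}@ssh.deployments.targon.com"
```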

Step 5: Stop Pod

Stop a running pod. Billing stops immediately.

| Method | Endpoint                | Auth      |
|--------|-------------------------|-----------|
| POST   | /api/volt/pods/:id/stop | X-API-Key |
Request
curl -X POST "https://api.voltagegpu.com/api/volt/pods/pod_xyz789/stop" \
  -H "X-API-Key: YOUR_API_KEY"

Deploy via Dashboard

Deploy Confidential Compute pods visually from the VoltageGPU dashboard.

1

Browse GPUs

Go to Browse Pods and click the Confidential Compute tab to filter Targon GPUs only.

2

Select a GPU

Choose a config (e.g. h200-small for 1x H200). Cards show live availability and pricing. Click Deploy Secure.

3

Configure and Launch

Choose a Docker image (default: ubuntu:22.04), attach your SSH key, and deploy. Your pod provisions in 30-90 seconds.

4

Connect via SSH

Use the SSH command shown on your pod card: ssh <uid>@ssh.deployments.targon.com

Deploy via CLI

Use the volt CLI for quick Confidential Compute operations.

# List available Confidential Compute GPUs
volt cc inventory

# Deploy a Confidential Compute pod
volt cc deploy --name my-secure-pod --resource h200-small --image ubuntu:22.04

# List all pods (shows Provider column: lium or CC)
volt pods list

# Stop a pod (auto-routes to correct provider)
volt pods stop <pod-id>

Standard vs Confidential Compute

| Feature          | Standard (Lium SN51)    | Confidential (Targon SN4)            |
|------------------|-------------------------|--------------------------------------|
| Provider         | Lium (Bittensor SN51)   | Targon (Bittensor SN4)               |
| Security         | Container isolation     | Intel TDX + TVM attestation          |
| GPU verification | Validator benchmarks    | GraVal CUDA verification             |
| Templates        | Lium templates          | Docker images directly               |
| SSH access       | ssh <pod-id>@...        | ssh <uid>@ssh.deployments.targon.com |
| Pricing markup   | 85% on Lium base        | 50% on Targon base                   |
| GPUs available   | RTX 3090 to H200        | H100, H200, B200, RTX 4090           |
| Best for         | General AI/ML workloads | Regulated data, enterprise, privacy  |

Security and Attestation

Intel TDX (Trust Domain Extensions)

Intel TDX creates hardware-isolated Trust Domains (TDs) that encrypt memory at the CPU level. The hypervisor and host OS are removed from the trust boundary. Before your pod starts, a remote attestation protocol verifies the hardware is genuine Intel silicon running an unmodified TDX-enabled firmware. This attestation report is cryptographically signed and can be independently verified.

GraVal CUDA Verification

GraVal (GPU Validation) performs CUDA-level verification to confirm that the GPU advertised is the actual hardware executing your workload. This prevents GPU spoofing attacks where a provider might claim to offer an H200 but actually run your code on lesser hardware. Validators on the Targon network continuously verify compute integrity.

Bittensor Subnet 4 (Targon / Manifold Labs)

Confidential Compute is powered by Targon, Bittensor Subnet 4, built by Manifold Labs ($10.5M Series A). The network provides 1500+ H200 GPUs with hardware-level security attestation. Validators verify compute integrity via remote attestation, and miners are scored on performance, uptime, and security compliance. VoltageGPU acts as a bridge to Targon — you deploy through our platform with unified billing, SSH key management, and a single dashboard for both Standard and Confidential pods.