
VoltageGPU CLI
Deploy and manage confidential Intel TDX GPU pods from the terminal
What is VoltageGPU?
VoltageGPU is a Confidential AI Infrastructure platform. Every pod runs inside an Intel TDX hardware enclave with encrypted memory, Protected PCIe and on-chain attestation — data is sealed from the host operator and the hypervisor by design.
The CLI is a thin wrapper over the Volt REST API, giving you one-command access to confidential pod deploys, live inventory, SSH key management and balance tracking.
Quick Start
```bash
# Install the CLI
pip install voltagegpu-cli

# Configure your API key
export VOLT_API_KEY="your_api_key_here"

# List available confidential inventory (Intel TDX)
volt cc inventory

# Deploy a confidential pod
volt cc deploy --name my-secure-pod --resource h200-small

# SSH into your pod
ssh <workload-uid>@ssh.deployments.targon.com
```
Get your API key at voltagegpu.com/dashboard
Features
Confidential Pod Management
- Deploy and destroy Intel TDX pods
- Live inventory with volt cc inventory
- Hardware-attested enclaves by default
Any Docker Image
- PyTorch, TensorFlow, JAX, vLLM
- Bring your own private registry image
- No template required for confidential deploys
SSH Key Management
- Add and manage SSH keys
- Secure access to your pods
Cost Tracking
- Real-time balance monitoring
- Per-pod cost breakdown
- Usage history
Python SDK
- Full programmatic access
- Async support
- Type hints included
Installation
From PyPI (Recommended)
```bash
pip install voltagegpu-cli
```
From Source
```bash
git clone https://github.com/Jabsama/VOLTAGEGPU-CLI.git
cd VOLTAGEGPU-CLI
pip install -e .
```
Requirements
- Python 3.8+
- pip
Configuration
Option 1: Environment Variable (Recommended)
```bash
export VOLT_API_KEY="your_api_key_here"
```
Option 2: Configuration File
Create ~/.volt/config.ini:
```ini
[api]
api_key = your_api_key_here
```
Option 3: Pass Directly
```python
from volt import VoltageGPUClient

client = VoltageGPUClient(api_key="your_api_key_here")
```
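If you need the same fallback behavior in your own scripts, the lookup can be reproduced with the standard library. A minimal sketch, assuming the precedence is explicit argument, then the VOLT_API_KEY environment variable, then ~/.volt/config.ini (the resolve_api_key helper is illustrative, not part of the SDK):

```python
import configparser
import os
from pathlib import Path

def resolve_api_key(explicit_key=None):
    """Resolve an API key: explicit argument first, then the
    VOLT_API_KEY environment variable, then ~/.volt/config.ini."""
    if explicit_key:
        return explicit_key
    env_key = os.environ.get("VOLT_API_KEY")
    if env_key:
        return env_key
    config_path = Path.home() / ".volt" / "config.ini"
    if config_path.exists():
        parser = configparser.ConfigParser()
        parser.read(config_path)
        # Matches the [api] section shown in Option 2 above
        return parser.get("api", "api_key", fallback=None)
    return None
```

An explicit key always wins, so automation can override a developer's local config without touching the environment.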
Usage
Pods
```bash
# List all your pods (confidential pods included)
volt pods list

# Get pod details
volt pods get <pod_id>

# Stop (= destroy enclave) a running confidential pod
volt pods stop <pod_id>

# Delete a pod
volt pods delete <pod_id> --yes

# Get SSH command for a pod
volt pods ssh <pod_id>
```
To create a pod, use the dedicated volt cc deploy command below — confidential pods are the only pod type exposed publicly.
Templates
Curated Confidential Compute templates shipped by the provider (Jupyter, Ubuntu, DeepSeek-R1, GLM-4.6, Kimi-K2, Qwen3-Coder, GPT-OSS-120B, MiniMax, etc.). Deploy directly with volt cc deploy --template TPL-UID.
```bash
# List all curated templates
volt templates list

# Filter by type (RENTAL or SERVERLESS)
volt templates list --type RENTAL

# Machine-readable output
volt templates list --json

# Show the full manifest for one template
volt templates get <tpl-uid>
```
SSH Keys
```bash
# List your SSH keys
volt ssh-keys list

# Add a new SSH key from file
volt ssh-keys add --name "my-laptop" --file ~/.ssh/id_ed25519.pub

# Add a new SSH key directly
volt ssh-keys add --name "my-key" --key "ssh-ed25519 AAAA..."

# Delete an SSH key
volt ssh-keys delete <key_id>
```
Machines
Lists all Confidential GPU machine tiers with live pricing (Intel TDX hardware only — RTX 4090 and other non-confidential tiers are excluded). Equivalent to volt cc inventory, but rendered as a flat table that is easy to script against.
```bash
# List available Confidential machines
volt machines list

# JSON output
volt machines list --json
```
Confidential Compute
Deploy hardware-attested Intel TDX GPU pods:
```bash
# List available Confidential Compute inventory
volt cc inventory

# Option A — deploy with an explicit image + resource
volt cc deploy --name my-secure-pod --resource h200-small --image ubuntu:22.04
volt cc deploy --name training --resource b200-small --image pytorch/pytorch:2.5.1-cuda12.4-cudnn9-devel

# Option B — deploy from a curated template
volt cc deploy --name deepseek-infer --template tpl-5npuuq70m1uo

# Override the template image while keeping the resource + commands
volt cc deploy --name my-pod --template tpl-xxx --image my-registry/custom:latest

# JSON output
volt cc inventory --json
```
Confidential pods use Docker images directly — no template required. SSH access runs through the SSH gateway: ssh <workload-uid>@ssh.deployments.targon.com.
Confidential pods cannot be paused. volt pods stop destroys the enclave and releases the resource, same as volt pods delete.
Account
```bash
# Check your balance
volt account balance

# Get account information
volt account info
```
JSON Output
All list commands support --json for machine-readable output:
```bash
volt pods list --json | jq '.[] | select(.status == "running")'
```
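The same filtering works in Python when jq is not available. A small sketch that consumes JSON in the shape the jq example suggests (the field names and sample records here are illustrative, not a documented schema):

```python
import json

# Sample output shaped like the jq example above; the fields
# "id", "name" and "status" are assumptions for illustration.
raw = json.dumps([
    {"id": "pod-1", "name": "llm-server", "status": "running"},
    {"id": "pod-2", "name": "scratch", "status": "stopped"},
])

pods = json.loads(raw)
running = [p["id"] for p in pods if p["status"] == "running"]
print(running)  # ['pod-1']
```

In a real pipeline you would feed `volt pods list --json` into this via `subprocess.run(..., capture_output=True)` instead of the hard-coded sample.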
Python SDK
Basic Usage
```python
from volt import VoltageGPUClient

# Initialize client
client = VoltageGPUClient()

# List pods (confidential pods included)
pods = client.list_pods()
for pod in pods:
    print(f"{pod.name}: {pod.status} ({pod.gpu_type})")

# Destroy a confidential pod (stop == delete for TDX)
client.stop_pod(pod_id)
```

Available Methods
| Method | Description |
|---|---|
| `list_pods()` | List all your pods (confidential included) |
| `get_pod(pod_id)` | Get pod details |
| `stop_pod(pod_id)` | Release a confidential pod (destroys the enclave) |
| `delete_pod(pod_id)` | Delete a pod |
| `list_ssh_keys()` | List your SSH keys |
| `add_ssh_key(name, public_key)` | Add a new SSH key |
| `delete_ssh_key(key_id)` | Delete an SSH key |
| `list_templates()` | List curated Confidential Compute templates |
| `get_template(uid)` | Fetch a single template with its full manifest |
| `list_machines()` | List Confidential GPU machine tiers with live pricing |
| `list_confidential_inventory()` | Raw confidential inventory (alias for machines) |
| `create_confidential_pod(name, …)` | Deploy a pod (accepts template_uid or explicit resource_name + image) |
| `get_balance()` | Get account balance |
| `get_account_info()` | Get account information |
Under the hood, create_confidential_pod() calls POST /api/volt/pods with { "provider": "confidential", ... }. See the API Reference for the raw REST contract.
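A sketch of assembling that request body. Only "provider": "confidential" is stated above; the remaining field names mirror the CLI flags and are assumptions — check the API Reference for the authoritative contract:

```python
import json

def build_confidential_pod_payload(name, resource_name=None,
                                   image=None, template_uid=None):
    """Assemble the JSON body for POST /api/volt/pods.
    "provider": "confidential" is documented; resource_name,
    image and template_uid are assumed from the CLI flags."""
    payload = {"provider": "confidential", "name": name}
    if template_uid:
        payload["template_uid"] = template_uid
    if resource_name:
        payload["resource_name"] = resource_name
    if image:
        payload["image"] = image
    return json.dumps(payload)

body = build_confidential_pod_payload(
    "my-secure-pod", resource_name="h200-small", image="ubuntu:22.04"
)
```

Mirroring the CLI, a request would carry either a template_uid or an explicit resource_name + image, not both.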
Examples
Deploy a confidential inference pod
```bash
# Pick a resource from live inventory
volt cc inventory

# Deploy
volt cc deploy \
  --name llm-server \
  --resource h200-small \
  --image ghcr.io/my-org/inference:latest

# SSH in
ssh <workload-uid>@ssh.deployments.targon.com
```
Batch Operations
```bash
# Stop all running pods (destroys the enclaves)
volt pods list --json | jq -r '.[] | select(.status == "RUNNING") | .id' | xargs -I {} volt pods stop {}
```

Environment Variables
| Variable | Description | Default |
|---|---|---|
| `VOLT_API_KEY` | Your VoltageGPU API key | - |
| `VOLT_BASE_URL` | API base URL | `https://api.voltagegpu.com/api` |
Contributing
We welcome contributions! Please see our Contributing Guide for details.
```bash
# Clone the repository
git clone https://github.com/Jabsama/VOLTAGEGPU-CLI.git
cd VOLTAGEGPU-CLI

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest
```
Confidential AI, straight from your terminal
Made by the VoltageGPU Team