
GPU Pods

Pods in VoltageGPU represent individual GPU rental units that users can lease for their computational needs. Each pod is a containerized environment that provides secure, isolated access to GPU resources through our high-performance cloud infrastructure.

30-60 sec deployment
Isolated containers
Global availability

What is a GPU Pod?

A GPU Pod is a fully isolated, containerized computing environment with dedicated GPU access, pre-installed ML frameworks, and persistent storage.

Pod Architecture

Each VoltageGPU Pod runs in an isolated Docker container with direct access to NVIDIA GPUs via the NVIDIA Container Toolkit. This provides near-native GPU performance while maintaining security isolation between users.

Dedicated GPU Access

Full or fractional GPU allocation with CUDA, cuDNN, and TensorRT pre-installed.

Secure Isolation

Each pod runs in its own namespace with network isolation and encrypted storage.

Root Access

Full root access via SSH or web terminal. Install any packages you need.

Persistent Storage

NVMe storage persists across restarts. Attach external volumes for large datasets.

Available GPU Types

Choose from consumer, professional, and datacenter GPUs based on your workload requirements.

| GPU Model | VRAM | Best For | Starting Price |
|-----------|------|----------|----------------|
| RTX 4090 | 24GB GDDR6X | Inference, Fine-tuning, Development | $0.39/hr |
| RTX 3090 | 24GB GDDR6X | Training, Inference, Budget workloads | $0.29/hr |
| A100 40GB | 40GB HBM2e | Large model training, Multi-GPU | $2.49/hr |
| A100 80GB | 80GB HBM2e | LLM training, Research, Production | $3.76/hr |
| H100 80GB | 80GB HBM3 | GPT training, Transformer Engine, FP8 | $6.62/hr |
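As a rough illustration of choosing a GPU from this table, the catalog can be encoded and queried in code. This is a minimal sketch; the `GPU_CATALOG` dict and `cheapest_gpu` helper are hypothetical, with VRAM and starting prices taken from the table above:

```python
# Hypothetical catalog built from the pricing table above (USD per hour).
GPU_CATALOG = {
    "RTX 4090":  {"vram_gb": 24, "price_per_hr": 0.39},
    "RTX 3090":  {"vram_gb": 24, "price_per_hr": 0.29},
    "A100 40GB": {"vram_gb": 40, "price_per_hr": 2.49},
    "A100 80GB": {"vram_gb": 80, "price_per_hr": 3.76},
    "H100 80GB": {"vram_gb": 80, "price_per_hr": 6.62},
}

def cheapest_gpu(min_vram_gb: int) -> str:
    """Return the cheapest listed GPU with at least `min_vram_gb` of VRAM."""
    candidates = {
        model: spec
        for model, spec in GPU_CATALOG.items()
        if spec["vram_gb"] >= min_vram_gb
    }
    return min(candidates, key=lambda m: candidates[m]["price_per_hr"])
```

For example, `cheapest_gpu(40)` returns `"A100 40GB"`, the lowest-priced option with 40GB or more of VRAM.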
GPU Splitting Available

Many nodes support GPU splitting, so you can rent individual GPUs rather than the entire machine and pay only for the GPUs you actually need.
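A sketch of what splitting saves: assuming a hypothetical 8-GPU node priced per GPU (the function name and node size are illustrative; the per-GPU rate used below is the RTX 4090 starting price):

```python
def split_rental_cost(price_per_gpu_hr: float, gpus_needed: int,
                      node_gpu_count: int, hours: float) -> tuple[float, float]:
    """Compare renting only the GPUs you need vs. the entire node.

    Assumes a uniform per-GPU hourly rate across the node.
    """
    split_cost = price_per_gpu_hr * gpus_needed * hours
    full_node_cost = price_per_gpu_hr * node_gpu_count * hours
    return split_cost, full_node_cost

# e.g. 2 of 8 RTX 4090s at $0.39/hr each, for 10 hours:
split, full = split_rental_cost(0.39, gpus_needed=2, node_gpu_count=8, hours=10)
# split ≈ $7.80 vs. full ≈ $31.20
```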

Quick Start

Deploy your first GPU pod in under 60 seconds with these simple steps.

1. Create an Account

Sign up at voltagegpu.com/register and get $5 free credit with code HASHCODE-voltage-665ab4.

2. Add SSH Key

Navigate to Dashboard → SSH Keys and add your public SSH key for secure access.

3. Browse Available Pods

Go to Browse Pods to see available GPU instances. Filter by GPU type, price, or region.

4. Configure & Deploy

Click "Rent Now" on your chosen pod, select a template (e.g., PyTorch), choose your SSH key, and click Deploy.

5. Connect via SSH

ssh root@your-pod-ip -p 22
Web Terminal Available

Access your pod directly from the browser at Your Pods → [Pod Name] → Terminal without needing a local SSH client.

Frequently Asked Questions

What happens if my pod crashes?

Your local volume data (/root) remains intact. External volumes (/mnt) are always preserved. You can restart the pod or deploy a new one and restore from backup.

Can I use custom Docker images?

Yes! Add your Docker credentials in Dashboard → Docker Credentials, then select your custom image when creating a pod. Images must be publicly accessible or use authenticated registries.

How is billing calculated?

Billing is per-hour based on the GPU type and number of GPUs rented. Billing starts when the pod is deployed and stops when terminated. Stopped pods do not incur GPU charges.
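A minimal sketch of this pricing model (the `pod_cost` helper is hypothetical, and linear proration of partial hours is an assumption; the docs above only state that billing is per-hour):

```python
def pod_cost(price_per_gpu_hr: float, num_gpus: int, hours_run: float) -> float:
    """Estimated charge for a pod from deploy to terminate.

    Assumption: partial hours are prorated linearly; the actual
    rounding behavior may differ.
    """
    return round(price_per_gpu_hr * num_gpus * hours_run, 2)

# e.g. one A100 80GB ($3.76/hr) for 2 hours:
cost = pod_cost(3.76, num_gpus=1, hours_run=2)  # 7.52
```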

Can I attach multiple GPUs?

Yes, if the node supports it. When renting, you can select the number of GPUs (up to the node's total). CPU, memory, and storage scale proportionally with GPU count.
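The proportional scaling described above can be sketched as follows (the node specs and `pod_resources` name are hypothetical):

```python
def pod_resources(gpus_rented: int, node_gpus: int, node_cpus: int,
                  node_mem_gb: int, node_storage_gb: int) -> dict:
    """Resources allocated when renting a fraction of a node.

    Per the FAQ, CPU, memory, and storage scale proportionally
    with the number of GPUs rented.
    """
    share = gpus_rented / node_gpus
    return {
        "cpus": int(node_cpus * share),
        "memory_gb": int(node_mem_gb * share),
        "storage_gb": int(node_storage_gb * share),
    }

# e.g. 2 of 8 GPUs on a node with 128 CPUs, 1024 GB RAM, 8000 GB NVMe:
alloc = pod_resources(2, node_gpus=8, node_cpus=128,
                      node_mem_gb=1024, node_storage_gb=8000)
# → 32 CPUs, 256 GB RAM, 2000 GB storage
```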

Is my data secure?

Each pod runs in an isolated container with its own network namespace. Storage is encrypted at rest. SSH access requires your private key. We never access your data.

What software is pre-installed?

Templates include CUDA, cuDNN, Python, and framework-specific packages (PyTorch, TensorFlow, JAX). You have root access to install anything else via apt or pip.

Ready to Deploy Your First Pod?

Get started with $5 free credit. No credit card required.