Templates in VoltageGPU are configuration blueprints that define how GPU-enabled Docker containers are created and deployed. They provide a standardized way to specify all necessary parameters for container creation.
Each template contains essential Docker configuration parameters that determine how the GPU environment will be set up, including the base image, environment variables, storage requirements, and network settings. Templates streamline the process of launching GPU environments by providing a reusable configuration that can be easily deployed.
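As an illustrative sketch only (the field names below are hypothetical, not the actual VoltageGPU schema), a template bundles these parameters along the following lines:

```json
{
  "name": "pytorch-training",
  "image": "your-registry/your-image",
  "tag": "latest",
  "environment": {
    "CUDA_VISIBLE_DEVICES": "all",
    "PYTHONUNBUFFERED": "1"
  },
  "disk_gb": 100,
  "exposed_ports": [22, 8888],
  "startup_command": "/bin/bash -c \"service ssh start && tail -f /dev/null\""
}
```

Because all of these values live in one reusable definition, launching a new GPU environment becomes a single "deploy from template" action rather than a form you fill out each time.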
Templates in VoltageGPU fall into several categories:

- **Official Templates** — Maintained by the VoltageGPU team. Verified and trusted to be secure and functional. Includes PyTorch, TensorFlow, JAX, and more.
- **Custom Templates** — Created by users for specific use cases. Can be public (shared with the community) or private (visible only to you).
- **Public Templates** — Community-contributed templates available to all users. Great for discovering new configurations and workflows.
- **Private Templates** — Your personal templates, visible only to your account. Perfect for proprietary configurations and internal tools.
Custom templates in VoltageGPU are user-defined configurations for GPU environments. While this flexibility is powerful, it's crucial to ensure these templates work correctly before deployment. VoltageGPU implements a robust template verification system that validates all custom templates before they can be used in production.
The verification process consists of four key steps:
1. **Container Deployment** — Creates a container on dedicated verification servers using the exact template configuration.
2. **Startup Monitoring** — Monitors the container startup process and tracks initialization of required services.
3. **SSH Connection Testing** — Verifies SSH accessibility, confirms proper credential handling, and tests connection stability.
4. **Error Reporting** — Provides detailed error logs with troubleshooting guidance if verification fails.
To check whether your container is likely to pass verification, test it locally first:
```shell
# Run your container
docker run [-e <env-var>=<value>] -d <your-docker-image>:<image-tag> <your-startup-command>

# Check if container keeps running
docker ps
```

If your container exits immediately, check your startup command and ensure any required services are configured to run in the foreground.
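Since verification hinges on the main process staying in the foreground and SSH being reachable, a minimal Dockerfile sketch can illustrate one way to satisfy both. This is illustrative only: the base image tag and package names assume a Debian-based CUDA image, not a VoltageGPU requirement.

```dockerfile
# Illustrative sketch: Debian-based CUDA image with sshd as the main process
FROM nvidia/cuda:12.8.0-runtime-ubuntu22.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && mkdir -p /var/run/sshd \
    && rm -rf /var/lib/apt/lists/*
EXPOSE 22
# -D keeps sshd in the foreground, so the container does not exit immediately
CMD ["/usr/sbin/sshd", "-D"]
```

With sshd as PID 1, `docker ps` will show the container still running, which mirrors what the verification servers check.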
| Template | Base Image | CUDA Version | Best For |
|---|---|---|---|
| PyTorch (CUDA) | daturaal/pytorch | 12.8.0 | Deep learning, Computer vision |
| TensorFlow (CUDA) | tensorflow/tensorflow | 12.4.0 | Production ML, Keras workflows |
| JAX (CUDA) | jax/jax | 12.6.0 | Research, Transformers, TPU-like |
| Ubuntu Base | nvidia/cuda | 12.8.0 | Custom setups, Flexibility |
| Jupyter Lab | jupyter/datascience | 12.4.0 | Interactive development, Notebooks |
Go to Dashboard → Templates → Create Template
```
Image: your-registry/your-image
Tag: latest
Registry: Docker Hub / Private Registry
Environment Variables:
  CUDA_VISIBLE_DEVICES=all
  PYTHONUNBUFFERED=1
Startup Command: /bin/bash -c "service ssh start && tail -f /dev/null"
```

Click "Create & Verify" to submit your template. The verification process typically takes 2-5 minutes.
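The startup command shown above follows a common pattern: `service ssh start` launches sshd in the background and returns, so something must block to keep the container's main process alive. `tail -f /dev/null` never exits on its own, which is why it is appended. A quick local demonstration of the blocking behavior, no container needed (`timeout` here stands in for stopping the container):

```shell
# tail -f /dev/null blocks forever; cap it at 1 second with `timeout`.
# Exit code 124 means timeout had to kill it, i.e. it was still running.
timeout 1 tail -f /dev/null
echo "exit code: $?"
```

Any long-running foreground process (a training script, a web server, sshd with `-D`) serves the same purpose; `tail -f /dev/null` is simply the conventional no-op choice.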
**Can I use private Docker images?** Yes! Add your Docker credentials in Dashboard → Docker Credentials, then reference your private images in templates.

**Why did my template fail verification?** Common reasons: the container exits immediately, SSH is not configured, the image is not accessible, or the startup command errors. Check the verification logs for details.

**Can I edit a template after creating it?** Yes, you can edit your custom templates. Changes require re-verification before the updated template can be used.

**How do I share a template with the community?** Set your template visibility to "Public" when creating or editing it. Public templates appear in the community templates list.
Build custom GPU environments tailored to your workflow.