CUDA is NVIDIA's parallel computing platform, and it is required for GPU-accelerated deep learning. Getting the right combination of driver version, CUDA toolkit version, and PyTorch version is one of the trickiest parts of setup.
Step 1: Install the NVIDIA Driver
Download the latest driver for your GPU from the NVIDIA Driver Downloads page. On Linux, you can also install via your package manager: Code Fragment D.2.1 below puts this into practice.
# Ubuntu/Debian
sudo apt update
sudo apt install nvidia-driver-550
# Verify installation
nvidia-smi
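The header line of nvidia-smi's output reports both the driver version and the maximum CUDA version that driver supports. As an illustrative sketch (the helper name is ours, and the sample string imitates the header format rather than live output), these two fields can be extracted programmatically:

```python
import re

# Sample line mimicking nvidia-smi's header; not live output.
sample = "| NVIDIA-SMI 550.54.14    Driver Version: 550.54.14    CUDA Version: 12.4 |"

def parse_smi_header(line: str):
    """Illustrative helper: pull driver and max CUDA versions from the header."""
    driver = re.search(r"Driver Version:\s*([\d.]+)", line)
    cuda = re.search(r"CUDA Version:\s*([\d.]+)", line)
    return (driver.group(1) if driver else None,
            cuda.group(1) if cuda else None)

print(parse_smi_header(sample))  # ('550.54.14', '12.4')
```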
The nvidia-smi command should display your GPU name, driver version, and the maximum CUDA version your driver supports.
Step 2: Install CUDA Toolkit (Optional if Using Conda)
If you install PyTorch via Conda, the CUDA toolkit is bundled automatically (see Section D.3). If you prefer a system-wide CUDA installation, download it from the NVIDIA CUDA Toolkit page. Code Fragment D.2.2 below puts this into practice.
# Verify CUDA installation
nvcc --version
# Check that PyTorch sees CUDA
python -c "import torch; print(torch.cuda.is_available())"
The NVIDIA driver, CUDA toolkit, and PyTorch must be compatible. PyTorch 2.5+ requires CUDA 12.1 or 12.4. If torch.cuda.is_available() returns False, the most common cause is a version mismatch. Check the PyTorch installation page for the correct command that matches your CUDA version.
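The compatibility rule above can be stated simply: the maximum CUDA version reported by nvidia-smi must be at least the CUDA version PyTorch was built against (torch.version.cuda). A minimal sketch of that check, with a helper name and example versions of our own choosing:

```python
def cuda_compatible(driver_max: str, torch_cuda: str) -> bool:
    """True if the driver's max CUDA version covers PyTorch's CUDA build.

    driver_max: "CUDA Version" reported by nvidia-smi, e.g. "12.4".
    torch_cuda: value of torch.version.cuda, e.g. "12.1".
    """
    def parse(v: str):
        # Compare versions numerically, not lexically ("12.10" > "12.4").
        return tuple(int(part) for part in v.split("."))
    return parse(driver_max) >= parse(torch_cuda)

print(cuda_compatible("12.4", "12.1"))  # True: driver is new enough
print(cuda_compatible("11.8", "12.1"))  # False: driver too old for this build
```

If the check fails, upgrading the driver (Step 1) is usually easier than downgrading PyTorch.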