Appendices
Appendix D: Development Environment Setup

D.2 CUDA and Driver Setup

CUDA is NVIDIA's parallel computing platform, and it is required for GPU-accelerated deep learning. Getting the right combination of driver version, CUDA toolkit version, and PyTorch version is one of the trickiest parts of setup.

Step 1: Install the NVIDIA Driver

Download the latest driver for your GPU from the NVIDIA Driver Downloads page. On Linux, you can also install via your package manager: Code Fragment D.2.1 below puts this into practice.

# Ubuntu/Debian
sudo apt update
sudo apt install nvidia-driver-550

# Verify the driver is loaded (a reboot may be required first)
nvidia-smi
Code Fragment D.2.1: Installing the NVIDIA driver on Ubuntu and verifying the installation with nvidia-smi.

The nvidia-smi command should display your GPU name, driver version, and the maximum CUDA version your driver supports.
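When scripting environment checks, the same information can be pulled out of nvidia-smi's banner programmatically. The sketch below is a minimal illustration: the parse_smi_banner helper and the sample banner line are this appendix's own constructions, not part of any official NVIDIA API, and the exact banner layout varies across driver releases.

```python
import re

def parse_smi_banner(line: str) -> dict:
    """Extract the driver version and the maximum supported CUDA
    version from the banner line at the top of nvidia-smi output."""
    match = re.search(
        r"Driver Version:\s*([\d.]+)\s*.*CUDA Version:\s*([\d.]+)", line
    )
    if match is None:
        raise ValueError("unrecognized nvidia-smi banner format")
    return {"driver": match.group(1), "max_cuda": match.group(2)}

# Illustrative banner line, shortened from real nvidia-smi output.
sample = "| NVIDIA-SMI 550.54.14    Driver Version: 550.54.14    CUDA Version: 12.4 |"
print(parse_smi_banner(sample))  # {'driver': '550.54.14', 'max_cuda': '12.4'}
```

The "CUDA Version" field reported here is the newest CUDA version the driver supports, not necessarily the version of any installed toolkit.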

Step 2: Install CUDA Toolkit (Optional if Using Conda)

If you install PyTorch via Conda, the CUDA toolkit is bundled automatically (see Section D.3). If you prefer a system-wide CUDA installation, download it from the NVIDIA CUDA Toolkit page. Code Fragment D.2.2 below verifies the installation.

# Verify the CUDA toolkit installation
nvcc --version

# Check that PyTorch sees CUDA
python -c "import torch; print(torch.cuda.is_available())"

Expected output (your versions may differ):
nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 12.4, V12.4.131
True
Code Fragment D.2.2: Verifying that the CUDA toolkit is installed and that PyTorch can detect your GPU.
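Before running the verification commands, it can help to confirm that the relevant executables are on your PATH at all. The helper below is a small illustrative sketch (check_cuda_tooling is this appendix's own name, not a standard utility); it only checks for the presence of the tools, not their versions.

```python
import shutil

def check_cuda_tooling() -> dict:
    """Return which CUDA-related command-line tools are on PATH.

    A missing nvidia-smi usually means no driver is installed; a
    missing nvcc means no system-wide CUDA toolkit, which is fine
    if PyTorch was installed with a bundled CUDA runtime."""
    tools = ("nvidia-smi", "nvcc")
    return {tool: shutil.which(tool) is not None for tool in tools}

status = check_cuda_tooling()
for tool, present in status.items():
    print(f"{tool}: {'found' if present else 'not found'}")
```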
Warning: Version Compatibility Matrix

The NVIDIA driver, CUDA toolkit, and PyTorch must be mutually compatible. PyTorch binaries are built against specific CUDA versions (for example, PyTorch 2.5 ships builds for CUDA 12.1 and 12.4). If torch.cuda.is_available() returns False, the most common cause is a version mismatch. Check the PyTorch installation page for the install command that matches your CUDA version.
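The rule of thumb behind the compatibility matrix is that the driver's maximum supported CUDA version (the "CUDA Version" shown by nvidia-smi) must be at least the CUDA version your PyTorch build was compiled against (torch.version.cuda). That check reduces to a simple version comparison; the helper names and version strings below are illustrative.

```python
def version_tuple(v: str) -> tuple:
    """Convert a dotted version string like '12.4' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def driver_supports(pytorch_cuda: str, driver_max_cuda: str) -> bool:
    """True if the driver's maximum supported CUDA version is new
    enough for the CUDA version the PyTorch build targets."""
    return version_tuple(driver_max_cuda) >= version_tuple(pytorch_cuda)

# Illustrative values: compare torch.version.cuda on your machine
# against the "CUDA Version" reported by nvidia-smi.
print(driver_supports("12.1", "12.4"))  # True: driver is new enough
print(driver_supports("12.4", "11.8"))  # False: driver too old
```

Note that tuple comparison handles multi-digit components correctly (e.g., "12.10" sorts after "12.4"), which naive string comparison would get wrong.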