Part II: Understanding LLMs

Chapter 18: Interpretability & Mechanistic Understanding

"The question is not whether neural networks have interpretable structure, but whether we have the patience and ingenuity to find it."

— Probe, Relentlessly Curious AI Agent
Figure 18.0.1: Peering inside a neural network is like giving it an X-ray: layer by layer, you start to see what the model actually learned versus what you hoped it would.

Chapter Overview

Imagine deploying a medical AI that recommends a treatment, and a doctor asks, "Why this recommendation?" You open the model's attention weights and find it fixated on a patient's zip code, not their symptoms. Without interpretability tools, you would never have caught this failure. As large language models are deployed in high-stakes applications, the question "why did the model produce this output?" becomes critical. Interpretability research aims to open the black box of transformer models, revealing the internal computations that drive predictions, the features that neurons encode, and the circuits that implement specific behaviors.
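The anecdote above, a model attending to zip code instead of symptoms, can be made concrete with a toy sketch of scaled dot-product attention. The token names, key vectors, and query below are all invented for illustration (keys are one-hot rather than learned), but the inspection step, computing the softmax attention weights and asking which input token receives the most mass, is exactly what attention analysis does:

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d_k)), row-wise."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

# Toy setup: four input tokens with hypothetical key vectors.
tokens = ["age", "zip_code", "cough", "fever"]
K = np.eye(4, 8)        # one orthonormal key per token (toy, not learned)
Q = 4.0 * K[1:2]        # a query that happens to align with the zip_code key

A = attention_weights(Q, K)
print(tokens[int(A[0].argmax())])  # -> zip_code: the token the head "fixates" on
print(np.round(A[0], 3))
```

In a real model you would pull `Q` and `K` from a specific head at a specific layer (e.g. via forward hooks) rather than construct them by hand, but the diagnostic question is the same: where does the probability mass go?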

This chapter covers the full spectrum of interpretability methods for transformers. It begins with attention analysis and probing classifiers, which offer accessible entry points for understanding model internals. It then advances to mechanistic interpretability, the ambitious program of reverse-engineering neural networks at the level of individual features and circuits. The chapter also covers practical interpretability tools for debugging, model editing, and representation engineering, as well as formal attribution methods for explaining transformer predictions.

By the end of this chapter, you will be able to analyze attention patterns to understand model behavior, use probing classifiers to test what information is encoded in hidden states, apply sparse autoencoders to extract interpretable features, and employ attribution methods to explain individual predictions.
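As a preview of the probing technique listed above, here is a minimal sketch of a linear probe. The "hidden states" are synthetic (a binary property injected along a random direction), so the setup is purely illustrative, but the workflow mirrors real probing: fit a simple classifier on representations, then measure held-out accuracy to test whether the property is linearly decodable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "hidden states": 256-dim vectors in which a binary property
# (hypothetically, past vs. present tense) is encoded along one direction.
n, d = 1000, 256
labels = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
hidden = rng.normal(size=(n, d)) + 2.0 * np.outer(labels * 2 - 1, direction)

# A linear probe: logistic regression trained by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(hidden[:700] @ w + b)))  # predicted probabilities
    grad = p - labels[:700]                         # gradient of log-loss
    w -= 0.1 * hidden[:700].T @ grad / 700
    b -= 0.1 * grad.mean()

# Held-out accuracy: high accuracy means the property is linearly decodable
# from the representation -- not necessarily that the model "uses" it.
preds = (hidden[700:] @ w + b) > 0
acc = (preds == labels[700:].astype(bool)).mean()
print(f"probe accuracy: {acc:.2f}")
```

With a real model you would replace the synthetic `hidden` matrix with activations extracted from a chosen layer; the caveat in the final comment is the standard one in the probing literature, and the chapter returns to it in depth.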

Big Picture

As LLMs become more capable, understanding what they have learned and why they produce specific outputs becomes critical. Interpretability tools like probing, attention analysis, and mechanistic interpretability complement the safety and alignment techniques in Chapters 17 and 32, helping you build systems you can trust and debug.

Prerequisites

Learning Objectives

Sections

What's Next?

In the next part, Part III: Working with LLMs, we put your LLM knowledge into practice with APIs, prompt engineering, and hybrid ML systems.