"The question is no longer how big can we build it, but how long should we let it think."
Frontier, Patiently Pondering AI Agent
Chapter Overview
For years, the recipe for better language models was straightforward: train bigger models on more data. The scaling laws of Kaplan et al. and Hoffmann et al. formalized this, showing smooth, predictable improvement as training compute increased. Then, in late 2024, a new paradigm arrived. OpenAI's o1 demonstrated that investing compute at inference time, letting a model "think longer" before answering, could match or surpass models trained with orders of magnitude more compute. DeepSeek followed with R1, an open-weight reasoning model whose R1-Zero variant revealed that reinforcement learning alone, without any supervised chain-of-thought data, could teach a model to reason step by step.
This chapter consolidates and expands on the reasoning model material introduced in Section 07.3, providing a dedicated, deep treatment of the test-time compute paradigm. We begin with the conceptual shift from train-time to test-time scaling (Section 8.1), then survey the major reasoning model architectures (Section 8.2). Section 8.3 dives into the training techniques that make reasoning models possible, including RLVR, GRPO, and process reward models. Section 8.4 provides practical guidance for prompting and deploying reasoning models in production. Section 8.5 addresses the compute-optimal inference problem and the benchmarks used to evaluate reasoning capabilities. Finally, Section 8.6 turns to formal, verifiable reasoning with proof assistants.
Recent breakthroughs show that LLMs can improve their outputs by "thinking longer" at inference time. Understanding chain-of-thought reasoning, test-time compute scaling, and verification strategies is increasingly central to building reliable AI systems, especially the agent architectures covered in Part VI.
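One simple way to spend extra compute at inference time is best-of-N sampling: draw several candidate answers and keep the one a verifier scores highest. The sketch below is purely illustrative; the toy exact-match `verify` function stands in for what would in practice be a learned reward model, a unit test, or another checker, and the `samples` list stands in for real LLM outputs.

```python
# Best-of-N sampling, a minimal sketch.
def best_of_n(candidates: list[str], verify) -> str:
    """Pick the candidate the verifier scores highest."""
    return max(candidates, key=verify)

def verify(answer: str) -> float:
    # Toy verifier for the question "What is 6 * 7?";
    # a real system would use a reward model or programmatic check.
    return 1.0 if answer.strip() == "42" else 0.0

# Pretend these four strings came from four independent LLM samples.
samples = ["43", "42", "41", "42"]
print(best_of_n(samples, verify))  # -> 42
```

Increasing N buys more chances to find a verifiable answer, which is the basic mechanism behind many test-time scaling results.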
Prerequisites
- Chapter 04: Transformer architecture (attention mechanism, feed-forward layers)
- Chapter 05: Decoding strategies (greedy, beam search, sampling methods)
- Chapter 06: Pre-training, scaling laws, and compute-optimal training
- Chapter 07: Modern LLM landscape (recommended but not strictly required)
- Basic familiarity with reinforcement learning concepts (reward, policy, optimization)
Learning Objectives
- Explain the paradigm shift from train-time to test-time compute scaling and identify the conditions under which each strategy is preferable
- Compare the architectures and training methods of major reasoning models: OpenAI o1/o3/o4-mini, DeepSeek R1, Gemini 2.5, and QwQ
- Describe RLVR, GRPO, and their roles in training reasoning behavior without supervised chain-of-thought data
- Distinguish Process Reward Models (PRMs) from Outcome Reward Models (ORMs) and explain when each is appropriate
- Apply effective prompting strategies for reasoning models, including budget control and structured output extraction
- Evaluate the cost/benefit trade-offs of test-time compute using the compute-optimal inference framework
- Navigate reasoning model benchmarks (AIME, MATH-500, ARC-AGI, SWE-bench) and interpret results critically
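The cost/benefit objective above can be made concrete with a back-of-envelope calculation. All prices and token counts below are assumptions for illustration, not real vendor pricing: the point is only that a small reasoning model sampling several long traces can still undercut one pass of a large model.

```python
# Hypothetical $/output-token prices (assumed, for illustration only).
PRICE_LARGE = 15.0 / 1_000_000   # large non-reasoning model
PRICE_SMALL = 1.0 / 1_000_000    # small reasoning model

def cost(price_per_token: float, tokens_per_answer: int, n_samples: int) -> float:
    """Total output-token cost of answering one question."""
    return price_per_token * tokens_per_answer * n_samples

large = cost(PRICE_LARGE, 1_000, 1)   # one direct 1000-token answer
small = cost(PRICE_SMALL, 2_000, 4)   # best-of-4 over 2000-token reasoning traces
print(f"large: ${large:.4f}  small + test-time compute: ${small:.4f}")
```

Under these assumed numbers the small model with 4x sampling costs $0.008 versus $0.015 for the large model; whether its accuracy actually matches is exactly what the compute-optimal inference framework in Section 8.5 asks.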
Sections
- 8.1 The Test-Time Compute Paradigm. The paradigm shift from train-time to test-time scaling: adaptive compute allocation, Snell et al.'s compute-optimal inference, and the new scaling frontier.
- 8.2 Reasoning Model Architectures: o1, o3, R1, QwQ. Survey of major reasoning models: the OpenAI o-series, DeepSeek R1, Gemini 2.5, and QwQ. Architecture patterns, thinking tokens, and hidden vs. visible reasoning traces.
- 8.3 Training Reasoning Models: RLVR, GRPO, PRM. RLVR, GRPO, process reward models, outcome reward models, STaR bootstrapping, and synthetic reasoning data generation.
- 8.4 Prompting and Using Reasoning Models. Decision frameworks, prompting differences, budget tokens, structured output, best-of-N sampling, cost management, and common pitfalls.
- 8.5 Compute-Optimal Inference and Evaluation. The compute-optimal frontier, MCTS for language, reasoning benchmarks (AIME, MATH-500, ARC-AGI, SWE-bench), cost analysis, and research frontiers.
- 8.6 Formal and Verifiable Reasoning with Proof Assistants. LLMs for formal theorem proving; LeanDojo and retrieval-augmented proving; miniF2F and ProofNet benchmarks; AlphaProof and RL for formal mathematics; self-play and expert iteration approaches.
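As a taste of the sampling strategies covered in Section 8.4, self-consistency offers a verifier-free alternative to best-of-N: sample several independent reasoning chains and take a majority vote over their final answers. The sketch below assumes the final answers have already been extracted from each chain; the sample strings are hypothetical.

```python
from collections import Counter

def self_consistency(final_answers: list[str]) -> str:
    """Majority vote over final answers from independent reasoning samples."""
    return Counter(final_answers).most_common(1)[0][0]

# Pretend each string is the final answer extracted from one sampled
# chain-of-thought (hypothetical outputs, for illustration).
print(self_consistency(["18", "18", "24", "18", "24"]))  # -> 18
```

Because no reward model is needed, self-consistency is often the cheapest way to trade extra samples for accuracy on tasks with short, comparable final answers.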
What's Next?
In the next chapter, Chapter 09: Inference Optimization, we cover the quantization, batching, and serving techniques that make LLMs practical in production.
