Front Matter
FM.3: Course Syllabi

Researcher Track

Understanding LLM internals, scaling behavior, and frontier research directions.

Prerequisites

Graduate-level mathematics, deep learning implementation experience, and familiarity with PyTorch. Completion of at least Chapters 3 through 6.

Learning Sequence

Follow the numbered steps in order. Each step builds on the previous one to give you a coherent understanding of this topic area.

  1. Chapter 04: The Transformer Architecture (self-attention, positional encoding, layer norms)
  2. Chapter 06: Pre-training, Scaling Laws and Data Curation (Chinchilla, power laws, compute-optimal training)
  3. Chapter 08: Reasoning Models and Test-Time Compute (chain-of-thought scaling, verification)
  4. Chapter 17: Alignment: RLHF, DPO and Preference Tuning (reward modeling, constitutional AI)
  5. Chapter 18: Interpretability and Mechanistic Understanding (probing, logit lens, superposition)
  6. Chapter 34: Emerging Architectures (post-transformer designs, efficiency frontiers)
  7. Chapter 35: AI and Society (governance, alignment open problems, societal impact)

Recommended Appendices

What Comes Next

Return to the Course Syllabi overview to explore other tracks and courses, or proceed to FM.4: How to Use This Book for a quick orientation on conventions and callout types.