Pathway 6: "I'm Completely New to ML" (Career Changer)
Target audience: Career changers, bootcamp graduates, and self-taught developers entering the AI field
Goal: Build a complete mental model of the LLM stack, from basic linear algebra through production agent systems, with working code at every step.
Approach
Start at Chapter 00 and read sequentially. Do not skip chapters. Complete every lab exercise; the hands-on practice is essential for building genuine understanding. Plan for roughly two chapters per week through Part I, then one chapter per week for the remaining material.
Chapter Guide
- Start Ch 00: ML and PyTorch Foundations – your starting point: tensors, gradients, and training
- Focus Ch 01: NLP and Text Representation – learn how text becomes numbers
- Focus Ch 02: Tokenization and Subword Models – understand how models see words and subwords
- Focus Ch 03: Sequence Models and Attention – the attention mechanism that powers everything
- Focus Ch 04: The Transformer Architecture – the architecture behind every modern LLM
- Focus Ch 05: Decoding and Text Generation – how models actually generate text
- Skim Ch 06: Pre-training and Scaling Laws – context on how large models are trained
- Skim Ch 07: The Modern LLM Landscape – survey of models you will use in practice
- Skim Ch 08: Reasoning Models and Test-Time Compute – understand when reasoning models add value
- Skim Ch 09: Inference Optimization – context on cost and latency tradeoffs
- Focus Ch 10: Working with LLM APIs – start building with APIs right away
- Focus Ch 11: Prompt Engineering – the skill you will use most often
- Skim Ch 12: Hybrid ML+LLM Architectures – learn when to combine classical ML with LLMs
- Skim Ch 13: Synthetic Data Generation – how to create training data with LLMs
- Skim Ch 14: Fine-Tuning Fundamentals – understand how models are customized
- Skim Ch 15: PEFT (LoRA, QLoRA) – efficient methods for adapting models on limited hardware
- Skip Ch 16: Knowledge Distillation and Model Merging – advanced training topic; return to it later
- Skim Ch 17: Alignment (RLHF, DPO) – understand how models learn to follow instructions
- Skip Ch 18: Interpretability – research-oriented; return to it later if interested
- Focus Ch 19: Embeddings and Vector Databases – store and search over your own data
- Focus Ch 20: RAG – build your first knowledge-grounded app
- Focus Ch 21: Conversational AI – add conversation and memory to your app
- Focus Ch 22: AI Agents – build your first autonomous agent
- Skim Ch 23: Tool Use and Protocols – connect agents to external tools via MCP
- Skip Ch 24: Multi-Agent Systems – advanced topic; return to it after gaining experience
- Skip Ch 25: Specialized Agents – advanced topic; return to it after gaining experience
- Skim Ch 26: Agent Safety and Production Infrastructure – safety guardrails before deploying agents
- Skim Ch 27: Multimodal Models – see what image and audio models can do
- Skim Ch 28: LLM Applications – real-world project ideas and patterns
- Focus Ch 29: Evaluation and Experiment Design – learn to measure what you build
- Skim Ch 30: Observability and Monitoring – track costs and errors in your applications
- Skim Ch 31: Production Engineering – understand how to deploy what you build
- Focus Ch 32: Safety, Ethics and Regulation – responsible AI use is essential from day one
- Skim Ch 33: Strategy, Product and ROI – understand the business side of AI products
- Optional Ch 34: Emerging Architectures – survey of where the field is heading
- Optional Ch 35: AI and Society – broader context on AI impact and governance
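To give a taste of the hands-on work Ch 00 begins with (tensors, gradients, training), here is a minimal illustrative sketch of gradient descent in plain Python. It is not code from the book, and the variable names are placeholders; the book's labs use PyTorch, which automates the gradient computation written out by hand below.

```python
# Fit a one-parameter linear model y = w * x to toy data with
# gradient descent, using only the standard library (no PyTorch).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # points on the line y = 2x

w = 0.0    # the parameter we want to learn
lr = 0.05  # learning rate (step size)

for step in range(200):
    # Mean squared error loss: L = mean((w*x - y)^2).
    # Its derivative with respect to w is mean(2 * (w*x - y) * x).
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill along the gradient

print(round(w, 3))  # converges close to the true slope, 2.0
```

The loop above is the whole training recipe in miniature: compute a loss, compute its gradient, and nudge the parameter downhill. Everything from Ch 00 onward elaborates this pattern at larger scale.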
Recommended Appendices
- Appendix B: ML Essentials – refresh core ML concepts before diving in
- Appendix D: Environment Setup – set up your development environment
- Appendix C: Python for LLM Development – review Python patterns used throughout the book
What Comes Next
Return to the Reading Pathways overview to explore other options, or proceed to FM.4: How to Use This Book for a quick orientation on conventions and callout types, then start reading.