How to Use This Section
Find the pathway that best matches your background and goals. Each pathway tells you which chapters to focus on, which to skim or skip entirely, and how long the journey should take at a pace of 8 to 12 hours per week. You can also combine pathways; see the note at the bottom of this page.
Self-Study Pathways
Not everyone will read this book cover to cover, and that is by design. The following twenty pathways are tailored to specific goals and backgrounds. Click any card to see the full chapter guide with hyperlinked chapter references.
Pathway 1: Build AI Products
Product managers and startup founders shipping AI features
Learn: LLM APIs, prompt engineering, RAG pipelines
Pathway 2: Fine-Tune and Train Models
ML engineers building custom models from foundation weights
Learn: Fine-tuning, PEFT/LoRA, RLHF alignment
Pathway 3: Build AI Agents
Software engineers creating autonomous tool-using agents
Learn: Agent architectures, tool use, multi-agent systems
Pathway 4: Deploy LLMs in Production
Platform and DevOps engineers scaling LLM services
Learn: Inference optimization, serving, observability
Pathway 5: Research LLMs
PhD students and researchers studying language model internals
Learn: Transformer math, scaling laws, interpretability
Pathway 6: New to ML
Career changers starting their machine learning journey
Learn: ML foundations, PyTorch basics, the full LLM stack
Pathway 7: Data Scientist Adding LLMs
Data scientists and analysts integrating LLMs into workflows
Learn: Embeddings, synthetic data, fine-tuning for tabular tasks
Pathway 8: NLP to LLMs
NLP professionals transitioning from classical to generative methods
Learn: Transformer architecture, tokenization, modern LLM APIs
Pathway 9: Full-Stack AI Features
Web and app developers adding AI to existing products
Learn: LLM APIs, streaming, conversational UI, RAG
Pathway 10: LLM Strategy for Leaders
CTOs, tech leads, and architects evaluating AI strategy
Learn: LLM landscape, cost/ROI, safety, production tradeoffs
Pathway 11: Domain Expert Applying LLMs
Healthcare, legal, and finance professionals leveraging AI
Learn: Prompt design, RAG for domain data, safety guardrails
Pathway 12: AI Safety and Alignment
Safety researchers and policy analysts studying LLM risks
Learn: RLHF, red-teaming, interpretability, ethics
Pathway 13: RAG and Search Systems
Search and knowledge engineers building retrieval pipelines
Learn: Embeddings, vector databases, RAG architectures
Pathway 14: Open-Source LLM Contributor
Developers contributing to open-source LLM frameworks
Learn: Pretraining internals, model merging, distillation
Pathway 15: Weekend Projects
Curious tinkerers building fun projects for learning
Learn: APIs, chatbots, prompt tricks, small RAG apps
Pathway 16: Prompt Engineer Specialist
Prompt engineers mastering systematic LLM interaction
Learn: Prompt patterns, chain-of-thought, evaluation methods
Pathway 17: AI Infrastructure Engineer
Infrastructure engineers building GPU clusters and serving layers
Learn: Inference optimization, quantization, GPU orchestration
Pathway 18: AI Educator / Trainer
Educators and trainers teaching AI concepts to others
Learn: Broad LLM literacy, demos, hands-on exercises
Pathway 19: Startup CTO Building with AI
Startup CTOs and technical co-founders shipping fast
Learn: API integration, cost management, rapid prototyping
Pathway 20: Multimodal AI Developer
Developers working with vision, audio, and text models together
Learn: Multimodal transformers, vision-language models, audio
These pathways are suggestions, not prescriptions. Many readers will combine elements of several. An ML engineer who also wants to build agents might follow Pathway 2 first, then pivot to the agent chapters from Pathway 3. A data scientist exploring safety might pair Pathway 7 with Pathway 12. The cross-references throughout the book make it easy to jump between chapters without losing context.
What Comes Next
If you are an instructor, or simply want complete week-by-week syllabi, proceed to FM.3: Course Syllabi. If you already know your path, jump to FM.4: How to Use This Book for a quick orientation on conventions and callout types, then start reading.