The best reference material is the kind you keep reaching for long after you have finished the book.
A Pragmatic Educator
Part Overview
The appendices provide essential reference material organized into five groups. Foundations and Setup covers mathematical prerequisites, ML essentials, Python tooling, environment configuration, and version control. Reference Materials includes the glossary, hardware guides, model cards, prompt templates, and benchmark catalogs. Framework Guides offers hands-on introductions to HuggingFace, LangChain, LangGraph, CrewAI, LlamaIndex, Semantic Kernel, and DSPy. Infrastructure and MLOps covers experiment tracking, inference serving, distributed computing, and containerization. Finally, Ecosystem Overview maps the broader LLM tooling landscape.
22 appendices (Appendix A through V) covering reference material, framework tutorials, and infrastructure guides.
These appendices serve two purposes: they provide prerequisite refreshers you can consult before diving into specific chapters, and they offer practical framework guides you will return to when building real projects. Think of them as the book's toolbox.
Foundations and Setup
Appendix A: Linear algebra, probability, calculus, and information theory essentials for understanding LLM internals.
Appendix B: Learning paradigms, loss functions, optimization, regularization, and evaluation metrics.
Appendix C: Essential libraries, virtual environments, Jupyter/Colab setup, and common scripting patterns.
Appendix D: Step-by-step guides for configuring your local development environment and cloud workspaces.
Appendix E: Version control for code, data, and experiments with Git, DVC, and reproducibility best practices.
Reference Materials
Appendix F: Definitions of key terms, acronyms, and concepts used throughout the book.
Appendix G: GPU selection, cloud provider comparison, and cost optimization strategies.
Appendix H: Quick-reference cards for major LLMs with selection criteria and comparison tables.
Appendix I: Ready-to-use prompt templates for common tasks: summarization, classification, extraction, and more.
Appendix J: Curated list of datasets, evaluation benchmarks, and leaderboard resources for LLM research.
Framework Guides
Appendix K: Practical guide to the HuggingFace ecosystem: loading models, fine-tuning with Trainer, and sharing on the Hub.
Appendix L: Building LLM applications with LangChain's chain composition, agent framework, and retrieval integration.
Appendix M: Graph-based agent orchestration with LangGraph: persistent state, branching, and human-in-the-loop patterns.
Appendix N: Multi-agent collaboration with CrewAI: role-based agents, tasks, and crew orchestration.
Appendix O: Data connectors, indexing strategies, and query engines for building RAG applications with LlamaIndex.
Appendix P: Semantic Kernel, Microsoft's SDK for integrating LLMs into enterprise applications with plugins and planners.
Appendix Q: Declarative prompt programming with DSPy: automatic optimization and modular pipeline composition.
Infrastructure and MLOps
Appendix R: Logging experiments, comparing runs, and managing model artifacts with Weights & Biases and MLflow.
Appendix S: High-throughput LLM serving with continuous batching, PagedAttention, and production deployment patterns.
Appendix T: Scaling data pipelines and training across clusters with PySpark, Databricks, and Ray.
Appendix U: Containerizing LLM applications with Docker, GPU passthrough, and orchestration with Docker Compose.
Ecosystem Overview
Appendix V: A survey of the broader LLM tooling landscape: evaluation platforms, guardrail libraries, observability tools, and emerging frameworks.
What Comes Next
Return to the Table of Contents to explore specific chapters, or visit the Front Matter for reading pathways tailored to your background.