Prerequisites
Completion of Chapters 10 and 11 (LLM APIs, prompt engineering), a basic understanding of RAG from Chapter 20, and working Python, including async/await.
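As a rough calibration of the async/await fluency assumed here: you should be comfortable defining coroutines and awaiting several of them concurrently. A minimal sketch (`fetch_completion` is a hypothetical stand-in for an LLM API call, not a real library function):

```python
import asyncio


async def fetch_completion(prompt: str) -> str:
    """Hypothetical stand-in for an async LLM API call."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"response to: {prompt}"


async def main() -> list[str]:
    prompts = ["summarize", "translate", "classify"]
    # asyncio.gather awaits all three coroutines concurrently,
    # returning results in the same order as the inputs.
    return await asyncio.gather(*(fetch_completion(p) for p in prompts))


results = asyncio.run(main())
print(results)
# → ['response to: summarize', 'response to: translate', 'response to: classify']
```

If this pattern is unfamiliar, review Python's asyncio basics before starting Chapter 22, where agents issue concurrent tool calls.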
Agent Builder Track
Building AI agents, from single API calls through multi-agent orchestration to production deployment.
Learning Sequence
Follow the numbered steps in order. Each step builds on the previous one to give you a coherent understanding of this topic area.
- Chapter 10: Working with LLM APIs (authentication, streaming, structured outputs)
- Chapter 11: Prompt Engineering and Advanced Techniques (chain-of-thought, few-shot, reflection loops)
- Chapter 20: Retrieval-Augmented Generation (grounding agents in external knowledge)
- Chapter 22: AI Agent Foundations (ReAct, tool calling, sandboxed execution)
- Chapter 23: Tool Use and Protocols (MCP, function calling, structured tool schemas)
- Chapter 24: Multi-Agent Systems (supervisor patterns, debate, pipeline orchestration)
- Chapter 25: Specialized Agents (code agents, research agents, data agents)
- Chapter 26: Agent Safety and Production Infrastructure (guardrails, monitoring, sandboxing)
- Chapter 31: Production Engineering and Operations (deploying agents reliably at scale)
- Chapter 34: Emerging Architectures and Scaling Frontiers (state-space models, mixture of experts, and next-generation designs)
- Chapter 35: AI, Society and Open Problems (safety, governance, and societal implications)
Recommended Appendices
- Appendix L: LangChain – build agent chains with LangChain
- Appendix M: LangGraph – design stateful agent graphs with LangGraph
- Appendix N: CrewAI – orchestrate multi-agent crews with CrewAI
- Appendix V: Tooling Ecosystem – survey the broader agent tooling landscape
What Comes Next
Return to the Course Syllabi overview to explore other tracks and courses, or proceed to FM.4: How to Use This Book for a quick orientation on conventions and callout types.