"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."
Antoine de Saint-Exupéry
Part Overview
Part III shifts from understanding LLMs to using them effectively. You will learn to work with provider APIs (OpenAI, Anthropic, Google, open-source), master prompt engineering techniques from basic to advanced (chain-of-thought, few-shot, system prompts), and design hybrid systems that combine traditional ML with LLM capabilities for production use cases.
This part contains three chapters (Chapters 10 through 12). It builds on the model knowledge from Part II and prepares you for training and adaptation in Part IV.
Theory becomes practice here. Part III equips you with the hands-on skills to call LLM APIs directly, craft and iterate on effective prompts, and architect systems that route work between classical ML models and LLMs in real-world production workloads.
Chapter 10: A hands-on guide to LLM provider APIs: authentication, chat completions, streaming, structured outputs, function calling, cost management, and building robust API clients.
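To preview the flavor of Chapter 10, here is a minimal sketch of two building blocks of a robust API client: assembling an OpenAI-style chat-completions request body, and retrying transient failures with exponential backoff. The function names, the model string, and the `send` callable are illustrative assumptions, not a specific provider SDK.

```python
import time

def build_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble an OpenAI-style chat-completions request body
    (model name is an illustrative placeholder)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
    }

def call_with_retries(send, payload: dict, max_attempts: int = 3,
                      base_delay: float = 1.0):
    """Retry transient failures with exponential backoff.
    `send` is any callable that performs the actual HTTP call."""
    for attempt in range(max_attempts):
        try:
            return send(payload)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Separating request construction from transport like this makes the client easy to unit-test: the retry logic can be exercised with a stubbed `send` that never touches the network.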
Chapter 11: The art and science of getting the best outputs from LLMs: zero-shot, few-shot, chain-of-thought, self-consistency, ReAct, and systematic prompt optimization frameworks.
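As a taste of the techniques Chapter 11 covers, the sketch below combines two of them: few-shot prompting (worked examples in the prompt) and chain-of-thought (examples that show step-by-step reasoning, plus a "think step by step" cue). The function name and prompt layout are illustrative assumptions, one common way of laying out such a prompt.

```python
def few_shot_cot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt whose worked examples demonstrate
    step-by-step reasoning (chain-of-thought).

    `examples` is a list of (question, worked_answer) pairs."""
    parts = []
    for q, worked_answer in examples:
        parts.append(f"Q: {q}\nA: {worked_answer}")
    # The trailing cue nudges the model to reason before answering.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

# Example usage with a single worked arithmetic example:
examples = [("What is 2 + 2?", "2 + 2 = 4. The answer is 4.")]
prompt = few_shot_cot_prompt("What is 13 + 5?", examples)
```

The model completes the final "A:" turn, and the worked examples steer both the reasoning style and the answer format.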
Chapter 12: When to use LLMs, when to use classical ML, and when to combine both. Covers LLM-as-judge patterns, routing, cascading, cost modeling, and architectural decision frameworks.
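The cascading pattern mentioned above can be sketched in a few lines: answer with a cheap model first, and escalate to an expensive model only when the cheap model's confidence falls below a threshold. The function signature and the 0.8 threshold are illustrative assumptions; Chapter 12 treats the cost/quality trade-off in depth.

```python
from typing import Callable

def cascade(query: str,
            cheap: Callable[[str], tuple[str, float]],
            expensive: Callable[[str], str],
            threshold: float = 0.8) -> str:
    """Answer with the cheap model unless its self-reported confidence
    is below `threshold`, in which case escalate to the expensive model.

    `cheap` returns an (answer, confidence) pair; `expensive` returns
    an answer only."""
    answer, confidence = cheap(query)
    if confidence >= threshold:
        return answer          # cheap answer is good enough
    return expensive(query)    # escalate: pay more for a better answer
```

Because most queries are easy, the expensive model is invoked only on the hard tail, which is where the cost savings of a cascade come from.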
What Comes Next
Continue to Part IV: Training and Adapting.