Building Conversational AI with LLMs and Agents
Appendix I

Prompt Template Catalog

Ready-to-use templates for classification, extraction, reasoning, code generation, and more

[Illustration: a robot chef assembles prompts like recipes, combining template ingredients from labeled containers into a plated output dish]
Big Picture

This appendix is a ready-to-use library of prompt templates organized by task type: classification, summarization, information extraction, question answering, code generation, chain-of-thought reasoning, system prompts for common roles, and templates designed for reasoning models. Each template is presented in a copy-paste format with placeholders clearly marked, along with notes on when to use it and how to adapt it.
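To make the copy-paste format concrete, here is a minimal sketch of how a catalog template with marked placeholders might be filled in code. The template text, category names, and the `fill_template` helper are illustrative assumptions, not entries from the catalog itself:

```python
# Hypothetical classification template in the appendix's copy-paste style:
# placeholders are marked with {curly_braces} and filled at call time.
CLASSIFICATION_TEMPLATE = """\
You are a precise text classifier.

Classify the text below into exactly one of these categories:
{categories}

Respond with only the category name, nothing else.

Text:
{text}
"""

def fill_template(template: str, **placeholders: str) -> str:
    """Substitute placeholders; raises KeyError if any are missing."""
    return template.format(**placeholders)

prompt = fill_template(
    CLASSIFICATION_TEMPLATE,
    categories="billing, technical_support, sales",
    text="My invoice shows a charge I don't recognize.",
)
```

Using `str.format` with keyword arguments means a forgotten placeholder fails loudly with a `KeyError` instead of silently sending a prompt with `{text}` still in it.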

Prompt engineering is as much a craft as an art. Having a set of tested, reusable templates saves significant time and prevents common mistakes such as missing role separators, ambiguous instructions, and output format conflicts. The templates here encode patterns that consistently produce reliable outputs across the major model families. They are a starting point, not a final answer: every deployment context requires adaptation and evaluation.
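As an example of the "missing role separators" mistake, here is a sketch of the role-separated message structure most chat APIs expect. Exact field names vary by provider; this follows the common system/user convention, and the instruction and placeholder text are illustrative:

```python
# Instructions live in the system message; the data to operate on lives in
# the user message. Concatenating both into one undifferentiated string is
# the "missing role separators" anti-pattern.
messages = [
    {
        "role": "system",
        "content": "You are a summarizer. Reply with exactly three bullet points.",
    },
    {
        "role": "user",
        "content": "Summarize the following article:\n\n{article_text}",
    },
]
```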

This appendix serves anyone who writes prompts regularly: engineers building LLM-powered features, analysts automating document workflows, researchers running evaluations, and practitioners who want a reference of battle-tested patterns rather than starting from scratch each time.

The templates here put into practice the principles taught in Chapter 11 (Prompt Engineering), which explains why these patterns work and how to reason about prompt design. The API usage patterns that wrap these templates are in Chapter 10 (LLM APIs). For RAG-specific prompt patterns combining retrieval context with generation, see Chapter 20 (RAG).

Prerequisites

Reading Chapter 11 (Prompt Engineering) before using this appendix will help you understand why the templates are structured as they are and enable you to adapt them effectively. If you are already experienced with prompt engineering, you can use this appendix directly as a reference library. Basic familiarity with the API you are calling (covered in Chapter 10) is assumed.

When to Use This Appendix

Use this appendix when you are building a new LLM feature and want a reliable starting template rather than designing from scratch. It is also useful as a quick reference when iterating on prompts: compare your current prompt against the canonical template structure to identify structural issues. For chain-of-thought and reasoning tasks, Sections I.6 and I.8 are particularly relevant when working with reasoning-capable models. For systematic evaluation of prompt variants, pair this appendix with Chapter 29 (Evaluation).

Sections