About This Section
This section covers industry applications, deployment concepts, safety topics, and practical patterns for building with LLMs. Each entry links to the relevant application or production chapter.
- Constitutional AI (CAI)
- An alignment approach from Anthropic where a model critiques and revises its own outputs according to a set of written principles (a "constitution"). CAI reduces the need for human feedback at scale by automating the preference-labeling step of alignment.
- See Section 17.4 (Constitutional AI and Beyond)
- Function Calling (Tool Use)
- A capability that lets LLMs generate structured requests to external tools, APIs, or functions. The model outputs JSON specifying which function to call and with what arguments. Function calling is the mechanism that transforms a chatbot into an agent that can take actions in the world.
- See Section 10.2 (Structured Outputs) and Section 22.2 (Tool Use)
- JSON Mode (Structured Output)
- An API feature that constrains the model to output valid JSON, essential for tool calling, data extraction, and any application requiring machine-parseable responses. Most major API providers support JSON mode or schema-constrained generation.
- See Section 10.2 (Structured Outputs)
- Knowledge Graph
- A structured representation of entities and their relationships stored as (subject, predicate, object) triples. Knowledge graphs augment RAG systems by providing structured context and enabling multi-hop reasoning over connected facts.
- See Section 20.5 (Advanced RAG Patterns)
- Prompt Engineering
- Crafting inputs to an LLM to elicit desired outputs without changing model weights. Techniques include system prompts, few-shot examples, chain-of-thought, role-playing, and structured output formatting. Prompt engineering is the fastest and cheapest way to customize LLM behavior.
- See Chapter 11 (Prompt Engineering)
- Prompt Injection
- An attack where adversarial text in the input causes the model to ignore its system instructions and follow attacker-supplied directives. Prompt injection is one of the most significant security challenges for LLM applications and has no complete solution yet.
- See Section 32.1 (Security and Safety)
- RAG (Retrieval-Augmented Generation)
- A technique that retrieves relevant documents from an external knowledge base and includes them in the LLM's context before generation. RAG grounds responses in factual data, reduces hallucinations, and enables knowledge updates without retraining. It is the most widely deployed LLM application pattern.
- See Chapter 20 (Retrieval-Augmented Generation)
- System Prompt
- A special message at the beginning of a conversation that sets the model's role, behavior constraints, and output format. System prompts are the primary mechanism for controlling LLM behavior in applications, acting as persistent instructions for the entire conversation.
- See Section 11.1 (Prompt Engineering Basics)
- Zero-Shot Learning
- The ability of a model to perform a task it was not explicitly trained for, using only a natural language description with no examples. Zero-shot capability is an emergent property of scale and is what makes LLMs useful as general-purpose tools without task-specific training data.
- See Section 11.1 (Prompt Engineering Basics)
* * *
A Living Document
The LLM field evolves rapidly. New terms emerge with each major model release and research cycle. This glossary covers the core vocabulary needed to read and understand the material in this book. For the latest terminology, consult the original research papers and official documentation referenced in each chapter's bibliography.
What Comes Next
Continue to Appendix G: GPU Hardware and Cloud Compute for the next reference appendix in this collection.