Appendices
Appendix I: Prompt Template Catalog

Templates for Reasoning Models

Reasoning models perform their own internal chain-of-thought, so the prompting strategy differs from standard models. Keep prompts concise and direct. Avoid telling the model how to think; instead, specify what you want clearly.

Reasoning Model: Direct Problem Solving

For reasoning models, less is more. State the problem clearly and let the model's built-in reasoning process work. Code Fragment I.8.1 below puts this into practice.

User Message
// Reasoning Direct user template
// Replace {{placeholders}} with your actual values before sending
{{problem_statement}}

Provide your final answer at the end, clearly labeled.
Code Fragment I.8.1: Presents a complex problem directly to a reasoning model, relying on its internal chain-of-thought rather than explicit prompt scaffolding.
Tip

Do NOT use chain-of-thought instructions ("think step by step") with reasoning models. They already think step by step internally. Adding such instructions can degrade performance by interfering with the model's native reasoning process. Instead, focus on making the problem specification clear and complete.

Reasoning Model: Complex Analysis

For multi-faceted analysis tasks, structure the desired output format rather than the reasoning process.

User Message
// Reasoning Analysis user template
// Replace {{placeholders}} with your actual values before sending
Analyze the following {{subject}}:

{{content}}

Provide your analysis covering:
1. {{aspect_1}}
2. {{aspect_2}}
3. {{aspect_3}}

Conclude with a clear recommendation and confidence level (high/medium/low).
Code Fragment I.8.2: Asks a reasoning model to perform multi-faceted analysis, structuring the expected output while leaving the reasoning process to the model's internal chain-of-thought.
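Filling these {{placeholder}} templates programmatically is straightforward. A minimal sketch, assuming the double-brace convention used above (the fill_template helper and example values are illustrative, not part of any specific library):

```python
import re

def fill_template(template: str, values: dict) -> str:
    """Replace every {{placeholder}} in the template; fail loudly if one is missing."""
    def replace(match):
        key = match.group(1)
        if key not in values:
            raise KeyError(f"missing value for placeholder: {key}")
        return str(values[key])
    return re.sub(r"\{\{(\w+)\}\}", replace, template)

# Hypothetical usage with the Reasoning Analysis template
template = (
    "Analyze the following {{subject}}:\n\n{{content}}\n\n"
    "Conclude with a clear recommendation and confidence level (high/medium/low)."
)
prompt = fill_template(
    template,
    {"subject": "vendor contract", "content": "(contract text here)"},
)
```

Failing on a missing placeholder, rather than silently sending "{{content}}" to the model, catches template bugs before they cost tokens.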
Reasoning Model Caveats

Reasoning models use "thinking tokens" that count toward token usage but are not always visible in the API response. This can make them 5 to 20 times more expensive per query than standard models. Use reasoning models selectively for tasks that genuinely require deep reasoning (math, complex logic, multi-step planning) and use standard models for simpler tasks like classification, extraction, or summarization.
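The cost difference is easy to underestimate because the thinking tokens are billed even when you never see them. A back-of-the-envelope sketch (all prices and token counts below are hypothetical, chosen only to illustrate the multiplier):

```python
def estimate_cost(input_tokens, visible_output_tokens, reasoning_tokens,
                  price_in_per_1m, price_out_per_1m):
    """Estimate query cost in dollars. Reasoning ("thinking") tokens are
    billed as output tokens even when they are not visible in the response."""
    billed_output = visible_output_tokens + reasoning_tokens
    return (input_tokens * price_in_per_1m
            + billed_output * price_out_per_1m) / 1_000_000

# Hypothetical prices (USD per 1M tokens) for a reasoning vs. a standard model
reasoning_cost = estimate_cost(1_000, 500, 4_000,
                               price_in_per_1m=3.0, price_out_per_1m=12.0)
standard_cost = estimate_cost(1_000, 500, 0,
                              price_in_per_1m=1.0, price_out_per_1m=4.0)
# With these illustrative numbers the reasoning query costs about 19x more,
# driven almost entirely by the 4,000 hidden reasoning tokens.
```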

Quick Reference: Choosing a Template

Task              Template               Recommended Temperature                Model Type
Classification    Zero-shot or Few-shot  0.0                                    Standard
Summarization     Structured Summary     0.0 to 0.3                             Standard
Data Extraction   JSON Extraction        0.0                                    Standard (with JSON mode)
Q&A (RAG)         RAG Q&A                0.0 to 0.2                             Standard
Code Generation   Code with Spec         0.0 to 0.2                             Standard or Reasoning
Math / Logic      Chain-of-Thought       0.0 (standard) or default (reasoning)  Reasoning preferred
Creative Writing  Custom system prompt   0.7 to 1.0                             Standard
Complex Analysis  Reasoning Analysis     Default (1.0)                          Reasoning
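The table above can be expressed as a small routing map in code. This is a sketch under stated assumptions: the task keys and the "standard"/"reasoning" labels are invented for illustration, and where the table gives a temperature range the low end is used:

```python
# Routing table mirroring the quick-reference table; temperature None means
# "use the provider's default" (the recommendation for reasoning models).
ROUTING = {
    "classification":   {"template": "Zero-shot or Few-shot", "temperature": 0.0,  "model": "standard"},
    "summarization":    {"template": "Structured Summary",    "temperature": 0.0,  "model": "standard"},
    "data_extraction":  {"template": "JSON Extraction",       "temperature": 0.0,  "model": "standard"},
    "rag_qa":           {"template": "RAG Q&A",               "temperature": 0.0,  "model": "standard"},
    "code_generation":  {"template": "Code with Spec",        "temperature": 0.0,  "model": "standard"},
    "math_logic":       {"template": "Chain-of-Thought",      "temperature": 0.0,  "model": "reasoning"},
    "creative_writing": {"template": "Custom system prompt",  "temperature": 0.7,  "model": "standard"},
    "complex_analysis": {"template": "Reasoning Analysis",    "temperature": None, "model": "reasoning"},
}

def choose_settings(task: str) -> dict:
    """Look up template, temperature, and model type for a task category."""
    try:
        return ROUTING[task]
    except KeyError:
        raise ValueError(f"unknown task {task!r}; expected one of {sorted(ROUTING)}")
```

Centralizing these choices in one table keeps per-task settings auditable instead of scattered across call sites.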
The Meta-Principle of Prompt Engineering

The best prompt is the one you have tested on your actual data. These templates are starting points, not finished products. Build an evaluation set of 20 to 50 representative inputs, measure output quality systematically, and iterate. Small wording changes can produce large quality differences, so treat prompt development as an empirical process, not a one-shot exercise.
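The iterate-and-measure loop described above can be sketched in a few lines. Everything here is a stand-in: call_model represents your API client, score your quality metric, and the eval-set shape is one possible convention:

```python
def evaluate_prompt(template, eval_set, call_model, score):
    """Run one prompt template over an evaluation set and return the mean score.

    template   -- a str.format-style prompt template
    eval_set   -- list of {"inputs": {...}, "expected": ...} records
    call_model -- function(prompt) -> model output (your API client)
    score      -- function(output, expected) -> float in [0, 1]
    """
    total = 0.0
    for example in eval_set:
        prompt = template.format(**example["inputs"])
        output = call_model(prompt)
        total += score(output, example["expected"])
    return total / len(eval_set)

# Compare two wordings of the same template on the same 20-50 examples,
# keep whichever scores higher, and repeat.
```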