Part VIII: Evaluation & Production
Chapter 29: Evaluation & Experiment Design

Experimental Design & Statistical Rigor

"A two percent improvement without a confidence interval is just a two percent hope."

Eval Eval, Statistically Anxious AI Agent
Big Picture

Claiming that Model A outperforms Model B on a benchmark means nothing without statistical evidence. LLM evaluations are inherently noisy: outputs vary with random seeds, prompt phrasing, and sampling temperature. A 2% accuracy difference on 100 test examples could easily be due to chance. This section teaches you to design experiments that produce reliable, reproducible conclusions by applying bootstrap confidence intervals, paired statistical tests, effect size reporting, and principled ablation study design. Building on the metrics from Section 29.1, it also covers the critical problem of benchmark contamination and how to detect it.

Prerequisites

This section builds on the evaluation foundations from Section 29.1. Familiarity with LLM capabilities from Section 07.1 and prompt engineering from Section 11.1 helps with understanding LLM-as-judge evaluation patterns.

A judge LLM in a courtroom evaluating the quality of another LLM's response, with evidence exhibits on display
Figure 29.2.1: LLM-as-judge: one model evaluates another's work in a courtroom of metrics. The verdict? It correlates surprisingly well with human judgment.

1. Why Statistical Rigor Matters for LLM Evaluation

Traditional software testing is deterministic: a function either returns the correct value or it does not. LLM evaluation is fundamentally different because the same prompt can produce different outputs across runs (when temperature > 0), different models may perform differently on different subsets of a benchmark, and small benchmark sizes amplify sampling noise. Without statistical rigor, you risk deploying a model based on a performance difference that was nothing more than random fluctuation.

Fun Fact

A surprising number of published LLM benchmarks use fewer than 200 test examples. At that size, a 95% confidence interval on accuracy spans roughly plus or minus 7 percentage points, which means many "state-of-the-art" claims are statistically indistinguishable from noise.

The core principle is straightforward: every evaluation result should come with a measure of uncertainty. A point estimate like "Model A achieves 78.3% accuracy" is incomplete without a confidence interval such as "78.3% ± 2.1% (95% CI)." When comparing two systems, you need a formal test of whether the observed difference is statistically significant.
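The ±7-point figure from the Fun Fact above can be checked with the textbook normal-approximation (Wald) interval for a proportion. This is a quick sketch for intuition (`wald_ci` is an illustrative helper, not part of this chapter's code); the bootstrap introduced below is the more general tool:

```python
import math

def wald_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) confidence interval for a proportion."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# At n = 200 and accuracy near 50%, the 95% CI spans roughly +/- 7 points
lo, hi = wald_ci(0.5, 200)
print(f"[{lo:.3f}, {hi:.3f}]")  # about [0.431, 0.569]
```

Doubling the half-width requires quadrupling the test set, which is why small benchmarks cannot support fine-grained model comparisons.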

Common Statistical Mistakes in LLM Papers

The most frequent errors, each addressed in this section, are: reporting point estimates without confidence intervals, comparing models with unpaired tests even though both were run on the same test set, performing many comparisons without correction, evaluating with a single random seed, and conflating statistical significance with practical significance.

Key Insight

Non-determinism makes everything harder. Classical ML evaluation assumes deterministic inference: given the same input, the model always produces the same output. LLMs with temperature > 0 violate this assumption. The same prompt can yield different answers on consecutive calls, which means a single evaluation run captures only one sample from a distribution of possible outputs. This is why LLM evaluation requires statistical machinery (confidence intervals, effect sizes, paired tests) that would be overkill for a deterministic classifier. If you skip the statistics, you are essentially making deployment decisions based on a single roll of the dice.
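A toy simulation makes this concrete (an illustrative sketch: the per-question success probability of 0.8 is an assumption, and no real model is called). Twenty "identical" evaluation runs of the same stochastic model produce a visible spread of accuracies:

```python
import numpy as np

# Hypothetical model that answers each of 100 questions correctly with
# probability 0.8, independently on every run (temperature > 0 noise)
rng = np.random.default_rng(0)
run_accuracies = rng.binomial(n=100, p=0.8, size=20) / 100

# A single evaluation run lands somewhere in this range by chance alone
print(f"min={run_accuracies.min():.2f}, max={run_accuracies.max():.2f}")
```

Any single run is one draw from this distribution, which is exactly why point estimates without uncertainty are misleading.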

2. Bootstrap Confidence Intervals

The bootstrap is a resampling method that estimates the sampling distribution of a statistic by repeatedly drawing samples (with replacement) from the observed data. It requires no distributional assumptions, which makes it ideal for LLM evaluation metrics that may have unusual distributions (skewed, bounded, multimodal).

The procedure is simple: given n evaluation results, draw B bootstrap samples of size n (with replacement), compute the metric on each sample, and use the distribution of bootstrapped metrics to construct a confidence interval. The percentile method takes the 2.5th and 97.5th percentiles for a 95% confidence interval. Pseudocode 29.2.1 below puts this into practice.

Algorithm: Bootstrap Confidence Interval (Percentile Method)

Input: scores S = [s1, ..., sn], metric function f, resamples B, significance level α
Output: point estimate, (lower, upper) confidence interval

1. Compute point estimate: θ̂ = f(S)
2. for b = 1 to B:
       a. Draw S*b = sample of n values from S with replacement
       b. Compute θ*b = f(S*b)
3. Sort {θ*1, ..., θ*B}
4. lower = percentile(θ*, 100 · α/2)
   upper = percentile(θ*, 100 · (1 − α/2))
5. return θ̂, (lower, upper)
Pseudocode 29.2.1: Bootstrap confidence interval using the percentile method

import numpy as np
from typing import Callable

def bootstrap_ci(
    scores: list[float],
    metric_fn: Callable = np.mean,
    n_bootstrap: int = 10_000,
    confidence: float = 0.95,
    seed: int = 42
) -> dict:
    """Compute bootstrap confidence interval for any metric.

    Args:
        scores: Per-example scores (e.g., accuracy per question)
        metric_fn: Aggregation function (mean, median, etc.)
        n_bootstrap: Number of bootstrap resamples
        confidence: Confidence level (e.g., 0.95 for 95% CI)
        seed: Random seed for reproducibility
    """
    rng = np.random.default_rng(seed)
    scores = np.array(scores)
    n = len(scores)

    # Generate all bootstrap sample indices at once for efficiency
    boot_indices = rng.integers(0, n, size=(n_bootstrap, n))
    boot_stats = np.array([metric_fn(scores[idx]) for idx in boot_indices])

    alpha = 1 - confidence
    lower = np.percentile(boot_stats, 100 * alpha / 2)
    upper = np.percentile(boot_stats, 100 * (1 - alpha / 2))

    return {
        "point_estimate": float(metric_fn(scores)),
        "ci_lower": float(lower),
        "ci_upper": float(upper),
        "confidence": confidence,
        "n_bootstrap": n_bootstrap,
        "std_error": float(np.std(boot_stats))
    }

# Example: binary accuracy scores from a benchmark
accuracy_scores = [1,1,0,1,1,1,0,1,0,1,1,1,0,1,1,1,0,1,1,1,
                   1,0,1,1,1,1,0,1,1,0,1,1,1,0,1,1,1,1,0,1]

result = bootstrap_ci(accuracy_scores)
print(f"Accuracy: {result['point_estimate']:.3f}")
print(f"95% CI: [{result['ci_lower']:.3f}, {result['ci_upper']:.3f}]")
print(f"Standard Error: {result['std_error']:.4f}")
Accuracy: 0.750
95% CI: [0.600, 0.875]
Standard Error: 0.0685
Code Fragment 29.2.2: Bootstrap confidence interval for an arbitrary evaluation metric
How Many Bootstrap Samples?

For standard confidence intervals, 10,000 bootstrap resamples are sufficient. For more precise p-values or very narrow confidence intervals, consider 50,000 or 100,000 resamples. The computation is fast because each resample only involves indexing into an array, not re-running the model. Figure 29.2.2 illustrates the bootstrap procedure.
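To see the Monte Carlo effect directly, the sketch below reruns the bootstrap under different RNG seeds and measures how much the lower CI endpoint wobbles at B = 1,000 versus B = 10,000 (`percentile_ci` is an illustrative helper mirroring the percentile method above):

```python
import numpy as np

def percentile_ci(scores: np.ndarray, n_boot: int, seed: int) -> tuple[float, float]:
    """95% percentile-bootstrap CI for the mean (sketch of the method above)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(scores), size=(n_boot, len(scores)))
    means = scores[idx].mean(axis=1)
    return float(np.percentile(means, 2.5)), float(np.percentile(means, 97.5))

scores = np.array([1.0] * 30 + [0.0] * 10)  # 75% accuracy on 40 examples

# Rerun with different seeds: the Monte Carlo wobble in the endpoints
# shrinks roughly like 1/sqrt(B)
for n_boot in (1_000, 10_000):
    lowers = [percentile_ci(scores, n_boot, s)[0] for s in range(5)]
    print(n_boot, round(max(lowers) - min(lowers), 4))
```

The residual wobble at B = 10,000 is far smaller than the interval width itself, which is why larger B buys little for routine reporting.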

A four-step flowchart: original scores (n = 40 examples) are resampled with replacement B = 10,000 times; the metric is computed on each resample (10,000 estimates); the estimates are sorted and the 2.5th and 97.5th percentiles are taken, yielding the 95% CI of the bootstrap distribution of accuracy.
Figure 29.2.2: The bootstrap procedure for constructing confidence intervals around evaluation metrics.

3. Paired Statistical Tests

When comparing two models, the key question is whether the observed difference in performance is statistically significant. Because both models are evaluated on the same test examples, you should use paired tests that account for the correlation between results on each example. Paired tests are more powerful than unpaired tests because they eliminate example-level variance.

Tip

When reporting LLM benchmark results, always include the confidence interval alongside the point estimate. "Model A scores 82.3% (95% CI: 80.1 to 84.5)" is far more informative than "Model A scores 82.3%." If two models' confidence intervals overlap, the performance difference is probably not meaningful regardless of how different the point estimates look. This single practice eliminates the majority of false claims about model superiority in internal evaluations.
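A minimal helper for the overlap screen described in the tip (`cis_overlap` is illustrative; note that overlapping intervals are a conservative red flag, and a formal paired test can still detect a real difference when intervals overlap):

```python
def cis_overlap(ci_a: tuple[float, float], ci_b: tuple[float, float]) -> bool:
    """True if two confidence intervals overlap.

    A conservative screen, not a substitute for a paired significance test.
    """
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Model A: 82.3% (80.1, 84.5) vs. Model B: 81.0% (78.8, 83.2)
print(cis_overlap((80.1, 84.5), (78.8, 83.2)))  # True: claim not convincing
```

Because paired tests exploit per-example correlation, they are strictly more powerful than this screen; use the screen to triage and the paired test to decide.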

McNemar's Test for Binary Outcomes

McNemar's test is the standard choice when evaluation outcomes are binary (correct or incorrect). It focuses on the discordant pairs: examples where the two models disagree. If Model A gets 20 examples right that Model B gets wrong, and Model B gets 8 examples right that Model A gets wrong, McNemar's test determines whether this 20 vs. 8 imbalance is statistically significant. Code Fragment 29.2.3 below puts this into practice.


from scipy import stats
import numpy as np

def mcnemar_test(model_a_correct: list[bool], model_b_correct: list[bool]) -> dict:
    """McNemar's test for comparing two models on the same examples.

    Focuses on discordant pairs (examples where models disagree).
    """
    a = np.array(model_a_correct)
    b = np.array(model_b_correct)

    # Count the four outcome types
    both_correct = np.sum(a & b)
    both_wrong = np.sum(~a & ~b)
    a_only = np.sum(a & ~b)  # A correct, B wrong
    b_only = np.sum(~a & b)  # B correct, A wrong

    # McNemar's test with continuity correction
    if a_only + b_only == 0:
        return {"message": "No discordant pairs; models agree on all examples"}

    chi2 = (abs(a_only - b_only) - 1) ** 2 / (a_only + b_only)
    p_value = stats.chi2.sf(chi2, df=1)  # upper tail of chi-squared with 1 df

    return {
        "contingency": {
            "both_correct": int(both_correct),
            "a_only_correct": int(a_only),
            "b_only_correct": int(b_only),
            "both_wrong": int(both_wrong)
        },
        "chi2_statistic": round(chi2, 4),
        "p_value": round(p_value, 6),
        "significant_at_005": p_value < 0.05,
        "model_a_accuracy": round(np.mean(a), 4),
        "model_b_accuracy": round(np.mean(b), 4)
    }

# Simulate evaluation results for 200 test examples
np.random.seed(42)
model_a = np.random.binomial(1, 0.82, size=200).astype(bool)
model_b = np.random.binomial(1, 0.75, size=200).astype(bool)

result = mcnemar_test(model_a.tolist(), model_b.tolist())
for key, val in result.items():
    print(f"  {key}: {val}")
contingency: {'both_correct': 131, 'a_only_correct': 29, 'b_only_correct': 16, 'both_wrong': 24}
chi2_statistic: 3.3778
p_value: 0.066092
significant_at_005: False
model_a_accuracy: 0.8000
model_b_accuracy: 0.7350
Code Fragment 29.2.3: McNemar's test for comparing two models on paired binary outcomes

Paired Bootstrap Test

For continuous scores (such as BERTScore, ROUGE F1, or judge ratings), the paired bootstrap test computes a confidence interval on the difference in metric values. By resampling paired differences, you directly estimate whether the gap between two systems is reliably different from zero. Code Fragment 29.2.4 below puts this into practice.


import numpy as np

def paired_bootstrap_test(
    scores_a: list[float],
    scores_b: list[float],
    n_bootstrap: int = 10_000,
    seed: int = 42
) -> dict:
    """Paired bootstrap test for comparing two systems.

    Tests whether the difference in mean scores is significant.
    """
    rng = np.random.default_rng(seed)
    a = np.array(scores_a)
    b = np.array(scores_b)
    diffs = a - b  # per-example differences
    n = len(diffs)

    observed_diff = np.mean(diffs)

    # Bootstrap the mean difference
    boot_diffs = []
    for _ in range(n_bootstrap):
        sample = rng.choice(diffs, size=n, replace=True)
        boot_diffs.append(np.mean(sample))

    boot_diffs = np.array(boot_diffs)

    # Two-sided p-value: fraction of bootstrap diffs on the wrong side of zero
    if observed_diff >= 0:
        p_value = 2 * np.mean(boot_diffs <= 0)
    else:
        p_value = 2 * np.mean(boot_diffs >= 0)

    ci_lower = np.percentile(boot_diffs, 2.5)
    ci_upper = np.percentile(boot_diffs, 97.5)

    # Cohen's d effect size
    cohens_d = observed_diff / np.std(diffs)

    return {
        "mean_diff": round(observed_diff, 4),
        "ci_95": (round(ci_lower, 4), round(ci_upper, 4)),
        "p_value": round(p_value, 4),
        "cohens_d": round(cohens_d, 4),
        "significant": p_value < 0.05
    }

# Example: BERTScore F1 values for two models on the same 50 examples
np.random.seed(42)
scores_a = np.random.normal(0.88, 0.05, 50)
scores_b = np.random.normal(0.85, 0.06, 50)

result = paired_bootstrap_test(scores_a.tolist(), scores_b.tolist())
print(f"Mean difference (A - B): {result['mean_diff']}")
print(f"95% CI: {result['ci_95']}")
print(f"p-value: {result['p_value']}")
print(f"Cohen's d: {result['cohens_d']}")
print(f"Significant at p < 0.05: {result['significant']}")
Mean difference (A - B): 0.0341
95% CI: (0.0149, 0.0537)
p-value: 0.0008
Cohen's d: 0.5123
Significant at p < 0.05: True
Code Fragment 29.2.4: Paired bootstrap test with effect size for continuous scores

Choosing the Right Test

Choosing the Right Test Comparison

Scenario | Test | When to Use | Key Assumption
--- | --- | --- | ---
Binary outcomes | McNemar's test | Correct/incorrect classification | Same test set for both models
Continuous scores, paired | Paired bootstrap | BERTScore, ROUGE, judge ratings | Same test set for both models
Continuous scores, paired, normal | Paired t-test | Large sample sizes (n > 30) | Approximately normal differences
Multiple system comparison | Friedman test + post-hoc | Ranking 3+ models | Same test set for all models
Ordinal ratings | Wilcoxon signed-rank | Human ratings on Likert scale | Symmetric difference distribution
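For the last row of the table above, SciPy provides the Wilcoxon signed-rank test directly. This sketch uses synthetic 1-5 Likert ratings as a stand-in for real annotator data (the rating distributions are assumptions for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 Likert ratings from one annotator on the same 30 outputs
rng = np.random.default_rng(1)
ratings_a = rng.integers(2, 6, size=30)                             # prompt A
ratings_b = np.clip(ratings_a - rng.integers(0, 2, size=30), 1, 5)  # prompt B

# zero_method="zsplit" keeps tied (zero-difference) pairs in the ranking
stat, p = stats.wilcoxon(ratings_a, ratings_b, zero_method="zsplit")
print(f"W={stat}, p={p:.4f}")
```

Because Likert differences are ordinal and often asymmetric, check the symmetric-differences assumption before trusting the p-value; with heavy ties, the paired bootstrap is a robust alternative.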
Warning: Multiple Comparisons

When comparing k model variants, the probability of at least one false positive grows rapidly. With 10 comparisons at α = 0.05, you have a 40% chance of at least one false positive. Apply the Bonferroni correction (divide α by the number of comparisons) or, better, use the Holm-Bonferroni method, which is less conservative while still controlling the family-wise error rate.
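The Holm-Bonferroni method mentioned above takes only a few lines to implement. This is a sketch (`holm_bonferroni` is an illustrative helper); libraries such as statsmodels offer equivalent routines for production use:

```python
def holm_bonferroni(p_values: list[float], alpha: float = 0.05) -> list[bool]:
    """Holm step-down procedure: returns True where the null is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    # Compare the smallest p-value against alpha/m, the next against
    # alpha/(m-1), and so on
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

print(holm_bonferroni([0.001, 0.040, 0.030]))  # [True, False, False]
```

Note that plain Bonferroni would also reject only the first hypothesis here, but in general Holm rejects everything Bonferroni does and sometimes more, at no cost in family-wise error control.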

4. Effect Sizes

Statistical significance tells you whether a difference exists; effect size tells you whether it matters. A p-value of 0.001 with an effect size of 0.02 means the difference is real but trivially small. Always report both significance and effect size.

Cohen's d is the most common effect size measure. It expresses the mean difference in units of standard deviation. The conventional interpretation is: d = 0.2 (small), d = 0.5 (medium), d = 0.8 (large). For LLM evaluation, consider the practical significance in your specific context: a 0.5% accuracy improvement may be significant for a medical diagnosis system but irrelevant for a chatbot. Figure 29.2.3 provides a visual guide to Cohen's d interpretation.

A horizontal scale of Cohen's d in standard-deviation units: 0 (no effect), 0.2 (small), 0.5 (medium), 0.8 (large), 1.2+ (very large).
Figure 29.2.3: Cohen's d effect size interpretation guide for LLM evaluation comparisons.
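A standalone helper for the paired case (a sketch; it uses the sample standard deviation with ddof=1, a slightly more conservative convention than dividing by n):

```python
import numpy as np

def paired_cohens_d(scores_a, scores_b) -> float:
    """Cohen's d for paired scores: mean difference over SD of differences."""
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    # ddof=1: sample standard deviation of the per-example differences
    return float(diffs.mean() / diffs.std(ddof=1))

d = paired_cohens_d([0.90, 0.70, 0.95, 0.80], [0.80, 0.70, 0.75, 0.70])
print(f"Cohen's d = {d:.2f}")
```

On real data, pair this number with its confidence interval: an effect size is itself an estimate with sampling error.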

5. Seed Management and Reproducibility

Random seeds affect LLM evaluation in multiple ways: data shuffling, few-shot example selection, sampling temperature, dropout at inference (for some models), and bootstrap resampling. Proper seed management ensures that results can be reproduced exactly and that reported variance reflects true model uncertainty rather than implementation randomness.


import random
import numpy as np
from dataclasses import dataclass, asdict

@dataclass
class ExperimentConfig:
    """Configuration for a reproducible LLM experiment."""
    model_name: str
    prompt_template: str
    temperature: float = 0.0
    max_tokens: int = 512
    eval_seed: int = 42        # seed for data shuffling/sampling
    bootstrap_seed: int = 123  # seed for statistical analysis
    num_eval_seeds: int = 5    # number of seeds for variance estimation

    def get_eval_seeds(self) -> list[int]:
        """Generate deterministic evaluation seeds."""
        rng = np.random.default_rng(self.eval_seed)
        return rng.integers(0, 2**31, size=self.num_eval_seeds).tolist()

def run_multi_seed_evaluation(config: ExperimentConfig, eval_fn) -> dict:
    """Run evaluation across multiple seeds and report aggregated results."""
    seeds = config.get_eval_seeds()
    all_results = []

    for seed in seeds:
        random.seed(seed)
        np.random.seed(seed)
        result = eval_fn(config, seed=seed)
        all_results.append(result)
        print(f"  Seed {seed}: accuracy = {result['accuracy']:.4f}")

    accuracies = [r["accuracy"] for r in all_results]
    return {
        "mean_accuracy": round(np.mean(accuracies), 4),
        "std_accuracy": round(np.std(accuracies), 4),
        "min_accuracy": round(min(accuracies), 4),
        "max_accuracy": round(max(accuracies), 4),
        "seeds_used": seeds,
        "config": asdict(config)
    }
Seed 42: accuracy = 0.8234
Seed 123: accuracy = 0.8187
Seed 7: accuracy = 0.8301
Seed 2024: accuracy = 0.8156
Seed 31415: accuracy = 0.8273
Code Fragment 29.2.5: ExperimentConfig and multi-seed evaluation for reproducible experiments
Key Insight

When using temperature = 0 with the OpenAI API, outputs are mostly deterministic but not perfectly so due to floating-point non-determinism in GPU computations. For truly reproducible results, also set seed in the API request and check the system_fingerprint in the response to verify that the same backend was used. Even then, provider updates can change behavior silently.

6. Ablation Study Design

An ablation study systematically removes or modifies components of a system to measure their individual contribution. In LLM applications, ablation targets include prompt components (system message, few-shot examples, chain-of-thought instructions), retrieval parameters (chunk size, top-k, reranking), model settings (temperature, max tokens), and architectural choices (model size, fine-tuning vs. prompting).

The key principle is to change exactly one variable at a time while holding everything else constant. This isolation is what distinguishes a rigorous ablation from casual experimentation. Figure 29.2.4 shows example ablation study results. Code Fragment 29.2.6 below puts this into practice.

A bar chart comparing ablation variants of a RAG pipeline. Full system baseline (system prompt + few-shot + CoT + reranker + GPT-4o): 0.847. Removing the system prompt: 0.791 (down 6.6%). Removing few-shot examples: 0.823 (down 2.8%). Removing chain-of-thought: 0.712 (down 15.9%). Removing the reranker (top-k only): 0.819 (down 3.3%). Replacing GPT-4o with GPT-4o-mini: 0.803 (down 5.2%).
Figure 29.2.4: Ablation study results showing the contribution of each component. Chain-of-thought has the largest impact.

from dataclasses import dataclass

@dataclass
class AblationConfig:
    """Configuration for one ablation variant."""
    name: str
    use_system_prompt: bool = True
    use_few_shot: bool = True
    use_cot: bool = True
    use_reranker: bool = True
    model: str = "gpt-4o"

def run_ablation_study(eval_fn, test_cases: list, seed: int = 42) -> list[dict]:
    """Run a structured ablation study with one variable removed at a time."""
    configs = [
        AblationConfig(name="Full System (Baseline)"),
        AblationConfig(name="No System Prompt", use_system_prompt=False),
        AblationConfig(name="No Few-Shot", use_few_shot=False),
        AblationConfig(name="No Chain-of-Thought", use_cot=False),
        AblationConfig(name="No Reranker", use_reranker=False),
        AblationConfig(name="Smaller Model", model="gpt-4o-mini"),
    ]

    baseline_score = None
    results = []

    for config in configs:
        scores = eval_fn(config, test_cases, seed=seed)
        mean_score = sum(scores) / len(scores)

        if baseline_score is None:
            baseline_score = mean_score

        # bootstrap_ci is defined in Code Fragment 29.2.2
        ci = bootstrap_ci(scores, seed=seed)
        delta = mean_score - baseline_score
        delta_pct = (delta / baseline_score) * 100 if baseline_score else 0

        results.append({
            "variant": config.name,
            "score": round(mean_score, 4),
            "ci_95": (ci["ci_lower"], ci["ci_upper"]),
            "delta": round(delta, 4),
            "delta_pct": round(delta_pct, 2)
        })

    return results
Code Fragment 29.2.6: run_ablation_study evaluates a series of AblationConfig variants, removing one component at a time and attaching a bootstrap CI to each score.

7. Benchmark Contamination Detection

Benchmark contamination occurs when evaluation data leaks into a model's training corpus. Because LLMs are trained on massive web scrapes, popular benchmarks like MMLU, GSM8K, and HumanEval have a high risk of appearing in training data. When a model has memorized the answers, its benchmark score dramatically overestimates its true capability on novel problems. Code Fragment 29.2.7 below puts this into practice.

Detection Strategies

A practical detection strategy is perturbation analysis: rephrase each benchmark question while preserving its answer, then compare accuracy on the original and perturbed sets. A model that has learned the underlying skill is robust to rephrasing; a model that has memorized the questions is not.


from typing import Callable

def perturbation_contamination_test(
    model_fn: Callable,
    original_questions: list[dict],
    perturbed_questions: list[dict],
    threshold: float = 0.15
) -> dict:
    """Detect benchmark contamination via perturbation analysis.

    If accuracy drops sharply on minor rephrasing, the model may
    have memorized the original questions rather than learned the skill.

    Args:
        model_fn: callable that takes a question and returns an answer
        original_questions: list of {'question': str, 'answer': str}
        perturbed_questions: rephrased versions with the same answers
        threshold: max acceptable accuracy drop (larger drops = contamination)
    """
    def evaluate_set(questions):
        correct = 0
        for q in questions:
            prediction = model_fn(q["question"])
            if prediction.strip().lower() == q["answer"].strip().lower():
                correct += 1
        return correct / len(questions)

    orig_acc = evaluate_set(original_questions)
    pert_acc = evaluate_set(perturbed_questions)
    drop = orig_acc - pert_acc

    return {
        "original_accuracy": round(orig_acc, 4),
        "perturbed_accuracy": round(pert_acc, 4),
        "accuracy_drop": round(drop, 4),
        "contamination_suspected": drop > threshold,
        "message": (
            "Contamination likely: large accuracy drop on minor rephrasing"
            if drop > threshold
            else "No strong contamination signal detected"
        )
    }
Code Fragment 29.2.7: Perturbation-based contamination test comparing accuracy on original and rephrased questions.
Data Contamination is Pervasive

Research has shown that many popular benchmarks appear in the training data of major LLMs. The problem is especially severe for benchmarks released before 2023, which have had years to propagate across the web. When evaluating models for deployment decisions, always supplement standard benchmarks with custom evaluation sets that you know are not publicly available.
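A second common screen is verbatim n-gram overlap between benchmark text and candidate training data. The 13-gram window below follows common contamination-audit practice, though the exact window size varies by study; `ngram_overlap` is an illustrative helper operating on whitespace tokens:

```python
def ngram_overlap(benchmark_text: str, corpus_text: str, n: int = 13) -> float:
    """Fraction of the benchmark's word n-grams found verbatim in a corpus
    sample. High overlap is a contamination red flag, not proof."""
    def ngrams(text: str) -> set:
        toks = text.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    bench = ngrams(benchmark_text)
    if not bench:
        return 0.0  # benchmark text shorter than n tokens
    return len(bench & ngrams(corpus_text)) / len(bench)
```

In practice you rarely have the full training corpus, so this screen is run against whatever public snapshots or data mixtures are available; absence of overlap there does not prove absence of contamination.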

Self-Check
Q1: Why should you use a paired test rather than an unpaired test when comparing two models on the same benchmark?
Show Answer
A paired test accounts for the correlation between the two models' results on each test example. Some examples are inherently harder than others, and both models tend to fail on them. By pairing results at the example level, the test eliminates this source of variance and gains statistical power. An unpaired test treats the two sets of scores as independent samples, which inflates the variance estimate and makes it harder to detect real differences.
Q2: You observe a statistically significant difference (p = 0.02) with Cohen's d = 0.08. What should you conclude?
Show Answer
The difference is statistically significant but has a negligible effect size (d = 0.08 is far below the "small" threshold of 0.2). This means the difference is real but too small to matter in practice. This typically happens with very large sample sizes, where even tiny differences become statistically significant. You should report both the p-value and effect size, and conclude that the models are practically equivalent despite the statistically significant difference.
Q3: How does the Bonferroni correction work, and when is it necessary?
Show Answer
The Bonferroni correction divides the significance threshold (α) by the number of comparisons being made. If you are comparing 5 model variants at α = 0.05, you would require p < 0.01 (0.05 / 5) for any individual comparison to be declared significant. This is necessary whenever you perform multiple statistical tests on the same data, because the probability of at least one false positive increases with the number of tests. Without correction, you risk "discovering" differences that are actually due to chance.
Q4: What does a sharp accuracy drop on perturbed benchmark questions suggest about a model, and why?
Show Answer
A sharp accuracy drop on minor rephrasing suggests benchmark contamination. If the model had genuinely learned the underlying skill (such as mathematical reasoning or factual recall), small surface-level changes to the question wording should not dramatically affect performance. A large drop indicates that the model memorized specific question-answer pairs from its training data rather than learning generalizable capabilities. The benchmark score therefore overestimates the model's true ability on novel problems.
Q5: Explain why running an experiment with 5 different seeds and reporting the mean is preferable to running with one seed.
Show Answer
Running with multiple seeds captures the variance introduced by stochastic elements in the evaluation pipeline (data ordering, few-shot example selection, sampling randomness). A single seed gives one point estimate that could be an outlier. By running with 5 seeds and reporting the mean plus standard deviation, you provide a more robust estimate that accounts for this variance. This also prevents the temptation to cherry-pick the seed that gives the best result, which is a common source of overly optimistic evaluations.
Real-World Scenario: A/B Testing a Prompt Rewrite with Statistical Rigor

Who: Applied ML team at a legal tech company comparing two prompt strategies for contract summarization

Situation: The team rewrote the summarization prompt to include chain-of-thought reasoning. Initial tests on 20 examples showed a 15% improvement in summary quality, and stakeholders wanted to ship immediately.

Problem: The 20-example result had a bootstrap confidence interval of [2%, 28%], meaning the true improvement could be anywhere from negligible to substantial. The team needed to determine whether the improvement was real and practically meaningful before deploying.

Dilemma: Running a full evaluation on 500 examples with human annotators would cost $5,000 and take 2 weeks. Shipping based on 20 examples risked deploying a change that might not actually help (or could even hurt edge cases).

Decision: The team ran a properly powered paired evaluation: 200 examples scored by both prompts on the same inputs, using McNemar's test for binary quality judgments and paired bootstrap for continuous scores.

How: They computed the minimum sample size needed (power analysis at 80% power, alpha = 0.05) for the expected effect size. Both prompts processed the same 200 contracts. Three annotators scored each output, with inter-annotator agreement measured via Cohen's kappa. They reported both p-values and Cohen's d effect size.

Result: The paired test confirmed a statistically significant improvement (p = 0.003) with a medium effect size (Cohen's d = 0.52). The 95% confidence interval narrowed to [7%, 18%]. The team deployed with confidence, and production metrics confirmed the evaluation results.

Lesson: Paired statistical tests on shared examples, combined with effect size reporting and proper power analysis, prevent both premature deployment of ineffective changes and unnecessary delays of genuine improvements.
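The power analysis step in the scenario can be approximated in a few lines. This normal-approximation sketch slightly underestimates the sample size a paired t-test would require, but it is close for n above roughly 30 (`paired_sample_size` is an illustrative helper):

```python
import math
from scipy import stats

def paired_sample_size(effect_size_d: float, alpha: float = 0.05,
                       power: float = 0.80) -> int:
    """Approximate paired examples needed to detect a standardized mean
    difference d with a two-sided test (normal approximation)."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = stats.norm.ppf(power)           # 0.84 for 80% power
    return math.ceil(((z_alpha + z_beta) / effect_size_d) ** 2)

print(paired_sample_size(0.5))  # medium effect: 32 paired examples
print(paired_sample_size(0.2))  # small effect: 197 paired examples
```

The quadratic dependence on 1/d is the key takeaway: halving the effect size you want to detect quadruples the required test set.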

Tip: Build Evaluation Sets from Production Failures

Every time a user reports a bad output, add that input (and the correct output) to your evaluation set. Over time, this creates a regression test suite that specifically covers your application's real weak spots.

Key Takeaways
Research Frontier

Open Questions in Statistical Evaluation (2024-2026):

Explore Further: Implement a paired bootstrap comparison between two models on 200+ examples, then visualize how the confidence interval width changes as you increase sample size from 50 to 500.

Exercises

Exercise 29.2.1: Bootstrap Confidence Intervals Conceptual

Explain why bootstrap resampling is preferred over parametric confidence intervals for LLM evaluation. Under what conditions would the bootstrap approach fail?

Answer Sketch

Bootstrap makes no assumptions about the distribution of scores, which is important because LLM evaluation scores (accuracy, preference rates) often follow non-normal distributions. You resample with replacement from the test set, compute the metric for each resample, and use the percentiles of the resampled distribution. It fails when the sample size is too small (fewer than 30 examples) or when the data has strong dependencies (e.g., multi-turn conversations where examples are not independent).

Exercise 29.2.2: Paired vs. Unpaired Tests Conceptual

Two models are evaluated on the same 200-example test set. Explain why a paired test (such as McNemar's test) is more appropriate than an unpaired test (such as a two-sample proportion test). What assumption does the paired test relax?

Answer Sketch

A paired test exploits the fact that both models answer the same questions. Some questions are easy (both models get them right) and some are hard (both get them wrong). A paired test focuses on the "discordant" pairs where only one model is correct. This reduces variance and increases statistical power. An unpaired test treats the two sets of scores as independent samples, ignoring this pairing structure and requiring a larger sample for the same power.

Exercise 29.2.3: Multiple Comparisons Analysis

You compare 10 different prompt variants on a 500-example test set. Using a significance level of 0.05, what is the probability of at least one false positive if you run all 45 pairwise comparisons without correction? Name two correction methods and their tradeoffs.

Answer Sketch

With 45 independent tests at alpha=0.05, the probability of at least one false positive is 1 - (0.95)^45, which is approximately 90%. Bonferroni correction divides alpha by the number of tests (0.05/45 = 0.0011), which is simple but very conservative. Benjamini-Hochberg (FDR) controls the expected proportion of false positives rather than the family-wise error rate, offering more power at the cost of allowing some false positives.

Exercise 29.2.4: Effect Size Interpretation Conceptual

Model A scores 82.1% and Model B scores 81.5% on a benchmark. The difference is statistically significant (p=0.01). Should you switch to Model A? Explain the role of effect size in this decision.

Answer Sketch

Statistical significance means the difference is unlikely due to chance, but a 0.6 percentage point improvement may be practically meaningless. Effect size (such as Cohen's d) measures the magnitude of the difference in standardized units. If the effect size is negligible (d less than 0.2), the improvement is real but too small to justify switching costs, increased complexity, or other tradeoffs. Always report both statistical significance and practical significance.

Exercise 29.2.5: Bootstrap Implementation Coding

Write a Python function bootstrap_ci(scores, n_resamples=10000, ci=0.95) that computes a bootstrap confidence interval for the mean of a list of evaluation scores. Test it on a sample dataset.

Answer Sketch

Use numpy.random.choice(scores, size=len(scores), replace=True) inside a loop for n_resamples iterations. Compute the mean of each resample. Sort the resampled means and take the percentiles at (1-ci)/2 and 1-(1-ci)/2. For a 95% CI with 10,000 resamples, this gives the 2.5th and 97.5th percentiles. Verify by running on known distributions where the analytical CI is available.

What Comes Next

In the next section, Section 29.3: RAG & Agent Evaluation, we focus on evaluating RAG and agent systems, which require specialized metrics beyond simple text quality.

Bibliography

Statistical Methods

Dror, R., Baumer, G., Shlomov, S., & Reichart, R. (2018). "The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing." ACL 2018

The definitive practical guide for statistical testing in NLP experiments, covering bootstrap tests, permutation tests, and multiple comparison corrections. Written for NLP practitioners rather than statisticians, with clear decision flowcharts. Required reading before publishing any benchmark comparison results.

Card, D., Henderson, P., Khandelwal, U., et al. (2020). "With Little Power Comes Great Responsibility." arXiv:2010.06595

Demonstrates that most NLP papers have insufficient statistical power to support their claims, meaning reported differences could be due to chance. Provides practical guidance for power analysis and sample size determination. Critical for anyone designing LLM experiments or reviewing evaluation claims.
Textbooks

Efron, B. & Tibshirani, R.J. (1993). "An Introduction to the Bootstrap." Chapman and Hall/CRC. https://doi.org/10.1201/9780429246593

The foundational textbook on bootstrap methods for statistical inference, covering confidence interval construction, hypothesis testing, and resampling theory. The bootstrap is the single most useful statistical tool for LLM evaluation. Recommended for anyone who wants to understand the theory behind the confidence intervals they compute.

Cohen, J. (1988). "Statistical Power Analysis for the Behavioral Sciences." 2nd ed. Routledge. https://doi.org/10.4324/9780203771587

The classic reference for effect size computation and interpretation, introducing Cohen's d, f, and the small/medium/large effect size conventions. Explains why statistical significance alone is insufficient and why effect sizes matter. Essential background for reporting meaningful experimental results.
Reproducibility

Dodge, J., Ilharco, G., Schwartz, R., et al. (2020). "Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping." arXiv:2002.06305

Shows that random seeds, data ordering, and weight initialization produce substantial variance in fine-tuned model performance, meaning single-run comparisons are unreliable. Quantifies the magnitude of this variance across models and tasks. Important for anyone reporting fine-tuning results.