
📚 AI-Generated Notes: These notes were generated by AI using highlights I exported from my Kindle. They're a quick reference, not a substitute for reading the book.


Noise

by Daniel Kahneman, Olivier Sibony, Cass R. Sunstein


Three-Sentence Summary

  1. Decision-making errors stem equally from bias and noise, yet noise receives less attention
  2. Humans struggle with forecasting; mechanical models marginally outperform human judgment
  3. Averaging multiple independent assessments effectively reduces variability in judgments

Key Quotes

“Wherever there is judgment, there is noise—and more of it than you think.”

“Complexity and richness do not generally lead to more accurate predictions.”

“Pundits blessed with clear theories about how the world works were the most confident and the least accurate.”

Definitions

Bias: Systematic errors where judgments deviate consistently from true values

Noise: Scatter in judgments; measurable without knowing actual target value

System Noise: Variability across judges for decisions that should be identical

Level Noise: Variability in judges’ average severity (some judges are consistently harsher than others)

Pattern Noise: Judges respond differently to identical cases (largest noise source)

Occasion Noise: Same judge produces varying judgments at different times

Noise vs. Bias

System noise occurs across many fields. Example: judges grant parole more frequently early in the day or post-meal.

Noise remains measurable without knowing true target values. Even single decisions contain noise—a shooter’s hand tremor means shots could land elsewhere.
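The claim that noise is measurable without ground truth can be sketched numerically: the scatter of several judgments of the same case is computed from the judgments alone, with no reference to the true value. A minimal Python sketch, using invented premium quotes in the spirit of the book's insurance noise audit:

```python
import statistics

# Hypothetical: five underwriters quote a premium for the same case.
# The numbers are invented for illustration.
quotes = [9500, 16700, 12000, 14400, 11200]

# Noise is the scatter of the judgments; no true value is needed.
noise = statistics.pstdev(quotes)

# Relative spread: noise as a fraction of the average quote.
spread = noise / statistics.fmean(quotes)
```

Note that nothing in the calculation depends on what the "correct" premium actually is, which is what makes noise audits possible for unverifiable judgments.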

Assessment & Judgment Quality

People evaluate their own judgment accuracy through an internal signal of completion rather than external feedback. Mean squared error (MSE) treats positive and negative errors equally but penalizes large errors disproportionately.

Both bias and noise equally undermine accuracy. Reducing either improves performance. Zero scatter represents optimal judgment, regardless of systematic bias. Many unverifiable judgments require assessing the quality of underlying reasoning processes.
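The claim that bias and noise undermine accuracy equally follows from the book's error equation, MSE = bias² + noise². A short sketch, assuming a set of judgments of a quantity whose true value is known (the numbers are invented):

```python
import statistics

def mse_decomposition(judgments, true_value):
    """Decompose mean squared error into bias^2 + noise^2.

    bias  = the mean error (systematic deviation from the true value)
    noise = the population standard deviation of the judgments (scatter)
    """
    errors = [j - true_value for j in judgments]
    bias = statistics.fmean(errors)
    noise = statistics.pstdev(judgments)
    mse = statistics.fmean(e * e for e in errors)
    return mse, bias, noise

# Example: five forecasters estimate a quantity whose true value is 100.
mse, bias, noise = mse_decomposition([104, 98, 110, 95, 103], 100)

# The decomposition is an algebraic identity, so it holds exactly:
assert abs(mse - (bias**2 + noise**2)) < 1e-9
```

Because bias and noise enter the equation symmetrically (both squared), shrinking either one by the same amount improves MSE by the same amount.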

Group Dynamics

Early decision-makers can sway much larger populations through chance variation. Social influences reduce diversity without lowering collective error. Jury deliberation paradoxically increases noise. Individual differences constitute the largest source of system noise.

Predictive Judgment Performance

Humans demonstrate weak predictive abilities—simple mechanical rules surpass human judgment. Hiring preferences barely exceed chance when predicting actual job performance.

Illusion of validity: Confidence in comparing two candidates exceeds the ability to predict which will actually perform better.

Additional complexity doesn’t improve forecasting accuracy. People systematically underestimate their actual ignorance. Those with confident theories proved least accurate.

Models consistently outperform humans, though only marginally. Advanced AI models show only slight improvements over simpler approaches. Stock market movements never lack an explanation, because plausible causal stories are always available after the fact.

Noise Sources

Heuristics & Biases

Substituting easier questions for difficult ones creates systematic errors

Confirmation Bias

Disregarding contradictory evidence and underweighting subsequent data

Substitution

Answering simpler questions instead (mood-based life satisfaction ratings)

Scales

People are better at comparing cases than at placing them on an absolute scale. Ambiguity at the upper end of a scale generates unavoidable noise. Perception of relative value is more reliable than assessment of absolute value.

Nine Judgment Improvement Strategies

  1. Aggregate Independent Assessments – Averaging diverse, complementary judgments reduces system noise (not bias)

  2. Recruit High-Ability Judges – Mental ability predicts judgment quality

  3. Select Open-Minded Judges – “Actively open-minded thinking” (seeking contradictory evidence) predicts forecasting success

  4. Monitor Information Intake – Additional information doesn’t guarantee improvement

  5. Incorporate Base Rates – Superior forecasters systematically examine base rates

  6. Embrace Perpetual Beta – Top forecasters continuously refine beliefs

  7. Prioritize Relative Over Absolute Judgments – Comparing team members reduces pattern noise versus isolated grading

  8. Structure Decisions – Decompose judgments into separate, independent components

  9. Defer Intuitive Choices – Evidence-based intuition surpasses snap judgments
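Strategy 1 can be illustrated with a small simulation: if each judge's estimate is the true value plus a shared bias plus independent noise, averaging k judgments shrinks the scatter by roughly 1/√k while leaving the bias untouched. A hypothetical sketch (all parameters invented):

```python
import random
import statistics

random.seed(0)
TRUE_VALUE, BIAS, NOISE_SD = 100.0, 5.0, 10.0

def one_judgment():
    # judgment = truth + shared bias + independent individual noise
    return TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD)

def averaged_judgment(k):
    # average of k independent judgments of the same case
    return statistics.fmean(one_judgment() for _ in range(k))

singles = [averaged_judgment(1) for _ in range(5000)]
averaged = [averaged_judgment(9) for _ in range(5000)]

# Scatter shrinks roughly threefold with 9 judges (1/sqrt(9));
# the shared bias (about +5) survives averaging unchanged.
print(round(statistics.pstdev(singles), 1))
print(round(statistics.pstdev(averaged), 1))
print(round(statistics.fmean(averaged) - TRUE_VALUE, 1))
```

This is why the note stresses that aggregation reduces system noise but not bias: the independent errors cancel, the shared one does not.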