This book in 3 sentences
- When it comes to decision errors, bias gets all the press – but noise is an equally bad problem.
- People are lousy at prediction, and models are only slightly better (but it still makes sense to use them).
- One of the best ways to reduce noise is to take the average of independent judgments (think wisdom of the crowds).
Top 3 Quotes
- "Wherever there is judgment, there is noise—and more of it than you think."
- “Complexity and richness do not generally lead to more accurate predictions.”
- “Pundits blessed with clear theories about how the world works were the most confident and the least accurate.”
Bias - error where judgments are systematically off target (you must know the true value to measure it)
Noise - the amount of scatter in judgments (can be measured without knowing the true value)
System noise - noise in judgments that should be identical across judges
Level noise - the variability in the average level of judgments by different judges, e.g. some judges are more severe than others
Pattern noise - the variability in judges’ responses to particular cases (typically the largest source of noise)
Occasion noise - variability in the judgments of the same judge on different occasions - e.g. a basketball player shooting free throws
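The decomposition above can be made concrete with a small sketch. The ratings matrix below is invented for illustration (judges as rows, cases as columns); it shows how system noise splits into level noise (differences in average severity) plus pattern noise (differences in how judges react to particular cases):

```python
import statistics

# Hypothetical ratings: rows = judges, columns = cases (e.g. sentencing years)
ratings = [
    [5.0, 7.0, 3.0],   # judge A
    [6.0, 9.0, 4.0],   # judge B: more severe on average -> level noise
    [5.0, 6.0, 6.0],   # judge C: ranks the cases differently -> pattern noise
]

judge_means = [statistics.mean(row) for row in ratings]
case_means  = [statistics.mean(col) for col in zip(*ratings)]
grand_mean  = statistics.mean(judge_means)

# Level noise: variability in each judge's average level of severity
level_noise_var = statistics.pvariance(judge_means)

# Pattern noise: variability left after removing judge and case effects
residuals = [
    ratings[j][c] - judge_means[j] - case_means[c] + grand_mean
    for j in range(len(ratings))
    for c in range(len(ratings[0]))
]
pattern_noise_var = statistics.pvariance(residuals)

# System noise combines the two components
system_noise_var = level_noise_var + pattern_noise_var
```

Note that nothing here needs the true value of any case: the decomposition works purely on the scatter of the judgments.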
Noise vs. Bias
In many areas, judgments that should be identical across judges are very noisy. This is system noise.
E.g. hungry judges - judges have been found more likely to grant parole at the beginning of the day or after a food break.
You can measure noise without seeing the true value of the target.
Noise exists even in singular decisions… the shooter’s unsteady hand implies that a single shot could have landed somewhere else.
We assess our own judgments based on internal signals
We assess the accuracy of our judgements based on an internal signal of judgment completion, unrelated to any outside information.
The standard way to measure error is the mean of squared errors (MSE)… it yields the sample mean as an unbiased estimate of the population mean, treats positive and negative errors equally, and disproportionately penalizes large errors.
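With MSE as the error measure, overall error splits cleanly into a bias term and a noise term (MSE = bias² + noise²). A small check with invented numbers:

```python
import statistics

# Hypothetical forecasts of a quantity whose true value is 100
true_value = 100.0
judgments = [104.0, 98.0, 110.0, 96.0, 102.0]

errors = [j - true_value for j in judgments]

mse   = statistics.mean(e * e for e in errors)  # mean squared error
bias  = statistics.mean(errors)                 # systematic offset (needs the true value)
noise = statistics.pstdev(judgments)            # scatter (needs no true value)

# The error equation: MSE = bias^2 + noise^2
assert abs(mse - (bias**2 + noise**2)) < 1e-9
```

The decomposition also shows why bias and noise are "equally bad": a unit of either contributes the same squared amount to MSE.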
Bias is always bad and reducing it always improves accuracy. Noise is equally bad and reducing noise is always an improvement.
The best amount of scatter is zero, even when the judgments are clearly biased.
Many judgments are unverifiable… we must assess the quality of the thought process that produces them.
Groups amplify noise
“chance variation in a small number of early movers” can have major effects in tipping large populations
Social influences are a problem because they reduce “group diversity without diminishing the collective error.”
Even jury deliberation increases noise.
The stable pattern noise that such individual differences produce is generally the single largest source of system noise.
People aren't very good at prediction – models aren't that much better
People aren’t very good at predictions… simple mechanical rules are superior to human judgment.
E.g. People make a lot of hiring mistakes - the probability that a preferred candidate would end up with a higher performance rating is barely better than chance. (Page 112)
The illusion of validity: you can often be quite confident in your assessment of which of two candidates looks better, but guessing which of them will actually be better is much harder.
Complexity and richness do not generally lead to more accurate predictions.
People who engage in predictive tasks underestimate their objective ignorance.
Pundits blessed with clear theories about how the world works were the most confident and the least accurate.
Models are consistently better than people, but not much better.
Denial of ignorance: People believe in the predictability of events that are in fact unpredictable.
AI models are only slightly better than simple models.
Causes can be drawn from an unlimited reservoir of facts and beliefs about the world… few large movements of the stock market remain unexplained.
Noise Has Several Causes
Heuristics & Biases: A heuristic for answering a difficult question X is to find the answer to an easier one.
Confirmation bias causes us to disregard conflicting evidence and assign less importance than we should to subsequent data.
Substitution can also be a source of occasion noise. If asked a question on life satisfaction, we might answer by consulting our immediate mood (much easier than evaluating our whole life).
Scales: Our ability to compare cases is much better than our ability to place them on a scale.
The lack of clarity on the upper end of the scale makes some noise inevitable.
People are much more sensitive to the relative value of comparable goods than to their absolute value.
How to Improve Judgments
1. Obtain independent judgments, then aggregate – Averaging independent judgments (e.g. wisdom of the crowds) is guaranteed to reduce system noise (but not bias). Works best when judges have diverse and complementary skills.
2. Pick smart judges – If you must pick people to make judgments, picking those with the highest mental ability makes a lot of sense.
3. Pick open-minded judges – The only measure of cognitive style or personality found to predict forecasting performance is "actively open-minded thinking". This means actively searching for information that contradicts your preexisting hypotheses.
4. Watch out for bias – More information is not always better; exposure to irrelevant details can bias the judgment.
5. Factor in base rates – The best forecasters systematically look for base rates.
6. Think perpetual beta – The best forecasters are committed to continuously updating and improving their beliefs.
7. Aim to make relative, not absolute judgments – You are less likely to be inconsistent (and to create pattern noise) when you compare the performance of two members of your team than when you grade each one separately.
8. Structure your judgments – Break decisions down into several independent tasks.
9. Delay your intuition – An intuitive choice that is informed by a balanced and careful consideration of the evidence is far superior to a snap judgment.
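Point 1 above (aggregate independent judgments) can be sketched with a small simulation. The parameters are invented for illustration: every judge shares a bias of +5, and each judgment carries independent noise with standard deviation 10. Averaging 25 judgments shrinks the noise by roughly 1/√25, but the shared bias survives untouched:

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 100.0
BIAS = 5.0        # shared systematic error: averaging cannot remove this
NOISE_SD = 10.0   # independent scatter of each judgment

def one_judgment():
    # One judge's estimate: truth, plus shared bias, plus independent noise
    return TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD)

def averaged_judgment(n):
    # Average of n independent judgments of the same question
    return statistics.mean(one_judgment() for _ in range(n))

# Compare the error structure of single vs averaged judgments
singles  = [one_judgment() for _ in range(5000)]
averages = [averaged_judgment(25) for _ in range(5000)]

single_noise   = statistics.pstdev(singles)               # close to 10
average_noise  = statistics.pstdev(averages)              # close to 10 / sqrt(25) = 2
remaining_bias = statistics.mean(averages) - TRUE_VALUE   # still close to 5
```

This is why the averaging advice is guaranteed to reduce system noise but not bias: independent errors cancel when averaged, shared errors do not.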