Lecture
Wednesday, February 25, 2026
| State of nature | Bring umbrella (\$) | Don’t bring (\$) |
|---|---|---|
| Rain (\(p = 0.3\)) | \(-5\) | \(-50\) |
| No rain (\(1 - p = 0.7\)) | \(-5\) | \(0\) |
A decision tree represents a decision problem as a sequence of nodes.
You choose before nature reveals the state. The decision node comes first — this computes \(\max_a \sum_s U(a,s) \, p(s)\).
Figure 1: Decision tree without information. You choose first, then nature reveals the state.
Work from the leaves back to the root:
“Bring” branch: \[E[\text{cost}] = 0.3 \times 5 + 0.7 \times 5 = 5\]
“Don’t bring” branch: \[E[\text{cost}] = 0.3 \times 50 + 0.7 \times 0 = 15\]
Best action: bring the umbrella. Expected cost \(= \$5.00\).
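The no-information rollback is easy to check in a few lines of Python (a sketch; the names `costs` and `expected_cost` are ours, not from the lecture):

```python
# Expected-cost rollback for the no-information tree.
# Costs are positive dollars (the negatives of the utilities in the table).
p_rain = 0.3
costs = {
    "bring": {"rain": 5, "dry": 5},   # umbrella costs $5 either way
    "dont":  {"rain": 50, "dry": 0},  # getting soaked costs $50
}

def expected_cost(action, p=p_rain):
    """E[cost] of one action, averaging over nature."""
    return p * costs[action]["rain"] + (1 - p) * costs[action]["dry"]

best = min(costs, key=expected_cost)  # decision node: pick the cheapest action
print(best, expected_cost(best))      # -> bring 5.0
```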
Nature reveals the state before you decide. The chance node comes first — this computes \(\sum_s \max_a U(a,s) \, p(s)\).
Figure 2: Decision tree with perfect information. Nature reveals the state, then you choose.
At each decision node, pick the cheapest action: if nature reveals rain, bring the umbrella (\$5 beats \$50); if it stays dry, leave it home (\$0 beats \$5). Then average over nature:
\[E[\text{cost}] = 0.3 \times 5 + 0.7 \times 0 = \$1.50\]
The EVPI is the gap between the two trees:
\[\text{EVPI} = 5.00 - 1.50 = \$3.50\]
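The perfect-information rollback just swaps the order of the minimum and the average. A sketch, reusing the same hypothetical cost table:

```python
p_rain = 0.3
costs = {"bring": {"rain": 5, "dry": 5}, "dont": {"rain": 50, "dry": 0}}

# Chance node first: for each revealed state, pick the cheapest action,
# then average over nature.
cost_perfect = sum(
    p * min(costs[a][state] for a in costs)
    for state, p in [("rain", p_rain), ("dry", 1 - p_rain)]
)
# Decision node first: average over nature inside, minimize outside.
cost_no_info = min(
    sum(p * costs[a][s] for s, p in [("rain", p_rain), ("dry", 1 - p_rain)])
    for a in costs
)
evpi = cost_no_info - cost_perfect
print(round(cost_perfect, 2), round(evpi, 2))  # -> 1.5 3.5
```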
The only difference is the order of the square and the circle — which is the order of \(\max\) and \(\sum\) in the formula.
Scaling note: imagine drawing a tree for the dike problem. The decision node fans into infinitely many heights, each into branches for sea level rise, storm surge, discount rate… The integral \(\max_a \int U(a, s) \, p(s) \, ds\) is the compact version of an infinite tree.
No real forecast is perfect. A forecast’s accuracy is characterized by two conditional probabilities: the hit rate \(P(\text{f. rain} \mid \text{rain}) = 0.90\) and the correct-rejection rate \(P(\text{f. dry} \mid \text{dry}) = 0.80\). These describe the forecast system, independent of the base rate.
To draw the imperfect-information tree, we need \(P(\text{f. rain})\) and \(P(\text{rain} \mid \text{f. rain})\) — those require Bayes’ rule.
First, the complementary likelihoods:
\[P(\text{f. dry} \mid \text{rain}) = 1 - 0.90 = 0.10, \qquad P(\text{f. rain} \mid \text{dry}) = 1 - 0.80 = 0.20\]
Then the law of total probability gives the signal probabilities:
\[ P(\text{f. rain}) = \underbrace{0.90 \times 0.30}_{0.27} + \underbrace{0.20 \times 0.70}_{0.14} = 0.41 \]
\[ P(\text{f. dry}) = 1 - 0.41 = 0.59 \]
These are the first branches of the imperfect-information tree.
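The same bookkeeping in Python (a sketch; the variable names are ours):

```python
p_rain = 0.30
hit_rate = 0.90           # P(f. rain | rain)
correct_rejection = 0.80  # P(f. dry | dry)

# Law of total probability over the two states of nature.
p_f_rain = hit_rate * p_rain + (1 - correct_rejection) * (1 - p_rain)
p_f_dry = 1 - p_f_rain
print(round(p_f_rain, 2), round(p_f_dry, 2))  # -> 0.41 0.59
```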
We can draw the first two layers, but the nature nodes need posterior probabilities we haven’t computed yet.
Figure 3: Incomplete imperfect-information tree. The signal and decision layers are known, but the posterior probabilities at the nature nodes require Bayes’ rule.
Apply Bayes’ rule to fill in the nature nodes:
\[ P(\text{rain} \mid \text{f. rain}) = \frac{P(\text{f. rain} \mid \text{rain}) \cdot p}{P(\text{f. rain})} = \frac{0.90 \times 0.30}{0.41} = \frac{0.27}{0.41} \approx 0.66 \]
\[ P(\text{rain} \mid \text{f. dry}) = \frac{P(\text{f. dry} \mid \text{rain}) \cdot p}{P(\text{f. dry})} = \frac{0.10 \times 0.30}{0.59} = \frac{0.03}{0.59} \approx 0.05 \]
The forecast shifts the probability from the base rate of 0.30 to 0.66 (forecast rain) or 0.05 (forecast dry).
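Bayes’ rule as code, continuing the sketch with the same hypothetical names:

```python
p_rain = 0.30
hit_rate = 0.90              # P(f. rain | rain)
miss_rate = 1 - hit_rate     # P(f. dry | rain)
p_f_rain, p_f_dry = 0.41, 0.59

# Posterior probability of rain given each signal.
post_rain_given_f_rain = hit_rate * p_rain / p_f_rain   # 0.27 / 0.41 ≈ 0.66
post_rain_given_f_dry = miss_rate * p_rain / p_f_dry    # 0.03 / 0.59 ≈ 0.05
```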
Now we can fill in every number. Three layers: Signal (chance) \(\to\) Decision (you) \(\to\) Nature (chance).
Figure 4: Decision tree with an imperfect forecast. Posterior probabilities from Bayes’ rule replace the prior at the nature nodes.
At each decision node, apply Monday’s cost-loss rule: bring the umbrella when \(P(\text{rain}) \times L > C\), i.e. when \(P(\text{rain}) > C/L = 5/50 = 0.10\). After a rain forecast, \(0.66 > 0.10\), so bring; after a dry forecast, \(0.05 < 0.10\), so don’t.
The bold branches in the tree confirm the optimal action at each decision node.
Roll back to the signal node. The expected cost along each optimal branch: \$5.00 after “f. rain” (bring), and \(\frac{0.03}{0.59} \times 50 \approx \$2.54\) after “f. dry” (don’t bring). Average over the signal:
\[ E[\text{cost with forecast}] = 0.41 \times 5.00 + 0.59 \times \frac{0.03}{0.59} \times 50 = 2.05 + 1.50 = \$3.55 \]
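The full rollback of the three-layer tree fits in a few lines (a sketch; `optimal_cost` is our own helper name):

```python
p_rain, L, C = 0.30, 50, 5
p_f_rain = 0.41
posteriors = {"f_rain": 0.27 / 0.41, "f_dry": 0.03 / 0.59}

def optimal_cost(p):
    """Cheaper of bringing (always C) vs. not bringing (expected p * L)."""
    return min(C, p * L)

# Average the per-signal optimal costs over the signal probabilities.
cost_forecast = (p_f_rain * optimal_cost(posteriors["f_rain"])
                 + (1 - p_f_rain) * optimal_cost(posteriors["f_dry"]))
evii = 5.00 - cost_forecast
print(round(cost_forecast, 2), round(evii, 2))  # -> 3.55 1.45
```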
| Scenario | Expected cost |
|---|---|
| No information | \(\$5.00\) |
| Imperfect forecast | \(\$3.55\) |
| Perfect information | \(\$1.50\) |
\[ \text{EVII} = 5.00 - 3.55 = \$1.45 \]
\[ \text{EVPI} = 5.00 - 1.50 = \$3.50 \]
The forecast captures \(1.45 / 3.50 \approx 41\%\) of the perfect-information value.
The umbrella illustrates a general three-step pattern:
\[ \text{EVII} = \underbrace{\int \left[ \max_a \int U(a, s) \, p(s \mid z) \, ds \right] p(z) \, dz}_{\text{get signal, update, optimize, average}} \;-\; \underbrace{\max_a \int U(a, s) \, p(s) \, ds}_{\text{optimize with prior alone}} \]
This reduces to EVPI when the signal perfectly reveals \(s\), and to zero when the signal is pure noise (\(p(s \mid z) = p(s)\)).
A key result: \(0 \leq \text{EVII} \leq \text{EVPI}\).
An imperfect forecast can never hurt in expectation: a rational decision maker can always ignore a worthless signal.
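The discrete analogue of the EVII formula is compact enough to write as a reusable function. A sketch under our own naming conventions: `u[a][s]` is the utility table, `prior[s]` the prior over states, and `likelihood[z][s]` the signal model \(P(z \mid s)\):

```python
def evii(u, prior, likelihood):
    """Expected value of imperfect information, discrete states and signals.

    u: {action: {state: utility}}, prior: {state: prob},
    likelihood: {signal: {state: P(signal | state)}}.
    """
    states, actions, signals = prior.keys(), u.keys(), likelihood.keys()
    # Optimize with the prior alone.
    v_prior = max(sum(u[a][s] * prior[s] for s in states) for a in actions)
    # Get signal, update by Bayes, optimize, average over signals.
    v_signal = 0.0
    for z in signals:
        p_z = sum(likelihood[z][s] * prior[s] for s in states)
        posterior = {s: likelihood[z][s] * prior[s] / p_z for s in states}
        v_signal += p_z * max(
            sum(u[a][s] * posterior[s] for s in states) for a in actions)
    return v_signal - v_prior

# Umbrella numbers: utilities are negative costs.
u = {"bring": {"rain": -5, "dry": -5}, "dont": {"rain": -50, "dry": 0}}
prior = {"rain": 0.3, "dry": 0.7}
lik = {"f_rain": {"rain": 0.9, "dry": 0.2}, "f_dry": {"rain": 0.1, "dry": 0.8}}
print(round(evii(u, prior, lik), 2))  # -> 1.45
```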
The relationship between forecast quality and value is nonlinear. Forecasts are most valuable near the decision boundary (\(C/L \approx p\)). When the decision is already obvious, even a much better forecast adds little value.
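One way to see this nonlinearity is to sweep the umbrella cost \(C\) while holding the forecast fixed: the EVII vanishes at both extremes of \(C/L\) and peaks near the decision boundary \(C = pL\). A sketch with our own variable names:

```python
p_rain, L = 0.30, 50
p_f_rain = 0.41
post_fr, post_fd = 0.27 / 0.41, 0.03 / 0.59  # posteriors from Bayes' rule

def evii_for_cost(C):
    """EVII when the umbrella costs C: no-info cost minus with-forecast cost."""
    no_info = min(C, p_rain * L)
    with_forecast = (p_f_rain * min(C, post_fr * L)
                     + (1 - p_f_rain) * min(C, post_fd * L))
    return no_info - with_forecast

for C in [1, 2.5, 5, 10, 15, 20, 33, 40]:
    print(C, round(evii_for_cost(C), 2))
# EVII is ~0 when C/L lies outside both posteriors (the forecast never
# changes the decision) and largest near the boundary C/L = p.
```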
Dr. James Doss-Gollin