Value of Imperfect Information

Lecture

Dr. James Doss-Gollin

Wednesday, February 25, 2026

Draft: This content is under development.

Recap: Monday’s Umbrella Problem

                            Bring umbrella   Don’t bring
Rain (\(p = 0.3\))              \(-5\)          \(-50\)
No rain (\(1 - p = 0.7\))       \(-5\)           \(0\)
  • Without information: bring the umbrella, expected cost \(= \$5.00\)
  • With perfect information: expected cost \(= 0.3 \times 5 + 0.7 \times 0 = \$1.50\)
  • EVPI \(= 5.00 - 1.50 = \$3.50\)

Decision Trees

A decision tree represents a decision problem as a sequence of nodes.

  • Squares are decisions (you choose)
  • Circles are chance events (nature chooses)
  • The order of nodes matches the order of \(\max\) and \(\sum\) in the formula

Tree: No Information

You choose before nature reveals the state. The decision node comes first — this computes \(\max_a \sum_s U(a,s) \, p(s)\).

digraph {
    rankdir=LR
    node [fontname="Helvetica", fontsize=11]
    edge [fontname="Helvetica", fontsize=10]

    D [shape=box, label="Decision", style=filled, fillcolor="#cde0f5"]

    NB [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#e0e0e0"]
    ND [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#e0e0e0"]

    p1 [shape=plaintext, label="-$5"]
    p2 [shape=plaintext, label="-$5"]
    p3 [shape=plaintext, label="-$50"]
    p4 [shape=plaintext, label="$0"]

    D -> NB [label="  Bring umbrella", penwidth=2.5]
    D -> ND [label="  Don't bring"]

    NB -> p1 [label="  Rain (0.3)"]
    NB -> p2 [label="  No rain (0.7)"]

    ND -> p3 [label="  Rain (0.3)"]
    ND -> p4 [label="  No rain (0.7)"]
}
Figure 1: Decision tree without information. You choose first, then nature reveals the state.

Solving the No-Information Tree

Work from the leaves back to the root:

“Bring” branch: \[E[\text{cost}] = 0.3 \times 5 + 0.7 \times 5 = 5\]

“Don’t bring” branch: \[E[\text{cost}] = 0.3 \times 50 + 0.7 \times 0 = 15\]

Best action: bring the umbrella. Expected cost \(= \$5.00\).
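The same rollback can be checked in a few lines of Python (a minimal sketch; the variable names are ours, and costs are stored as positive dollar amounts):

```python
p_rain = 0.3
# Cost of each action in each state, as positive dollar amounts.
cost = {"bring": {"rain": 5, "dry": 5}, "dont": {"rain": 50, "dry": 0}}

# Roll back each branch: expected cost of committing to an action
# before nature reveals the state.
expected = {a: p_rain * cost[a]["rain"] + (1 - p_rain) * cost[a]["dry"]
            for a in cost}
best = min(expected, key=expected.get)
print(best, expected[best])  # bring 5.0
```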

Tree: Perfect Information

Nature reveals the state before you decide. The chance node comes first — this computes \(\sum_s \max_a U(a,s) \, p(s)\).

digraph {
    rankdir=LR
    node [fontname="Helvetica", fontsize=11]
    edge [fontname="Helvetica", fontsize=10]

    N [shape=circle, label="Nature", width=1.0, style=filled, fillcolor="#e0e0e0"]

    DR [shape=box, label="Decision", style=filled, fillcolor="#cde0f5"]
    DD [shape=box, label="Decision", style=filled, fillcolor="#cde0f5"]

    p1 [shape=plaintext, label="-$5"]
    p2 [shape=plaintext, label="-$50"]
    p3 [shape=plaintext, label="-$5"]
    p4 [shape=plaintext, label="$0"]

    N -> DR [label="  Rain (0.3)"]
    N -> DD [label="  No rain (0.7)"]

    DR -> p1 [label="  Bring", penwidth=2.5]
    DR -> p2 [label="  Don't", style=dashed, color="#999999"]

    DD -> p3 [label="  Bring", style=dashed, color="#999999"]
    DD -> p4 [label="  Don't", penwidth=2.5]
}
Figure 2: Decision tree with perfect information. Nature reveals the state, then you choose.

Solving the Perfect-Information Tree

At each decision node, pick the cheapest action:

  • Rain (prob 0.3): bring umbrella, cost \(= 5\) (vs. 50 without)
  • No rain (prob 0.7): leave umbrella, cost \(= 0\) (vs. 5 with)

Average over nature:

\[E[\text{cost}] = 0.3 \times 5 + 0.7 \times 0 = \$1.50\]
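In code, the only change from the no-information rollback is that the minimum moves inside the probability-weighted sum (a minimal Python sketch; names are ours):

```python
p_rain = 0.3
cost = {"bring": {"rain": 5, "dry": 5}, "dont": {"rain": 50, "dry": 0}}
probs = {"rain": p_rain, "dry": 1 - p_rain}

# Nature moves first: pick the cheapest action in each revealed state,
# then average over nature.
cost_perfect = sum(probs[s] * min(cost[a][s] for a in cost) for s in probs)
print(cost_perfect)  # 1.5
```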

EVPI From the Trees

The EVPI is the gap between the two trees:

\[\text{EVPI} = 5.00 - 1.50 = \$3.50\]

The only difference is the order of the square and the circle — which is the order of \(\max\) and \(\sum\) in the formula.

Scaling note: imagine drawing a tree for the dike problem. The decision node fans out into infinitely many dike heights, each of which branches over sea level rise, storm surge, discount rate… The integral \(\max_a \int U(a,s) \, p(s) \, ds\) is the compact version of an infinite tree.

An Imperfect Forecast

No real forecast is perfect. A forecast’s accuracy is characterized by two conditional probabilities:

  • Hit rate: \(P(\text{f. rain} \mid \text{rain}) = 0.90\)
  • Correct rejection rate: \(P(\text{f. dry} \mid \text{dry}) = 0.80\)

These describe the forecast system, independent of the base rate.

To draw the imperfect-information tree, we need \(P(\text{f. rain})\) and \(P(\text{rain} \mid \text{f. rain})\) — those require Bayes’ rule.

Building the Tree: Signal Probabilities

First, the complementary likelihoods:

\[P(\text{f. dry} \mid \text{rain}) = 1 - 0.90 = 0.10, \qquad P(\text{f. rain} \mid \text{dry}) = 1 - 0.80 = 0.20\]

Then the law of total probability gives the signal probabilities:

\[ P(\text{f. rain}) = \underbrace{0.90 \times 0.30}_{0.27} + \underbrace{0.20 \times 0.70}_{0.14} = 0.41 \]

\[ P(\text{f. dry}) = 1 - 0.41 = 0.59 \]

These are the first branches of the imperfect-information tree.
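The two total-probability sums are easy to verify in Python (a sketch; variable names are ours):

```python
p_rain = 0.3
hit_rate = 0.90           # P(forecast rain | rain)
correct_rejection = 0.80  # P(forecast dry | dry)

# Complementary likelihoods: misses and false alarms.
miss_rate = 1 - hit_rate                   # P(forecast dry | rain) = 0.10
false_alarm_rate = 1 - correct_rejection   # P(forecast rain | dry) = 0.20

# Law of total probability over the two states of nature.
p_forecast_rain = hit_rate * p_rain + false_alarm_rate * (1 - p_rain)
p_forecast_dry = 1 - p_forecast_rain
print(round(p_forecast_rain, 2), round(p_forecast_dry, 2))  # 0.41 0.59
```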

The Incomplete Tree

We can draw the first two layers, but the nature nodes need posterior probabilities we haven’t computed yet.

digraph {
    rankdir=LR
    node [fontname="Helvetica", fontsize=11]
    edge [fontname="Helvetica", fontsize=10]

    F [shape=circle, label="Signal", width=1.0, style=filled, fillcolor="#e0e0e0"]

    DR [shape=box, label="Decision", style=filled, fillcolor="#cde0f5"]
    DD [shape=box, label="Decision", style=filled, fillcolor="#cde0f5"]

    NRB [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#ffe0e0"]
    NRD [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#ffe0e0"]
    NDB [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#ffe0e0"]
    NDD [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#ffe0e0"]

    F -> DR [label="  Forecast rain (0.41)"]
    F -> DD [label="  Forecast dry (0.59)"]

    DR -> NRB [label="  Bring"]
    DR -> NRD [label="  Don't"]

    DD -> NDB [label="  Bring"]
    DD -> NDD [label="  Don't"]

    NRB -> QRB1 [label="  Rain (???)"]
    NRB -> QRB2 [label="  Dry (???)"]
    NRD -> QRD1 [label="  Rain (???)"]
    NRD -> QRD2 [label="  Dry (???)"]
    NDB -> QDB1 [label="  Rain (???)"]
    NDB -> QDB2 [label="  Dry (???)"]
    NDD -> QDD1 [label="  Rain (???)"]
    NDD -> QDD2 [label="  Dry (???)"]

    QRB1 [shape=plaintext, label="-$5"]
    QRB2 [shape=plaintext, label="-$5"]
    QRD1 [shape=plaintext, label="-$50"]
    QRD2 [shape=plaintext, label="$0"]
    QDB1 [shape=plaintext, label="-$5"]
    QDB2 [shape=plaintext, label="-$5"]
    QDD1 [shape=plaintext, label="-$50"]
    QDD2 [shape=plaintext, label="$0"]
}
Figure 3: Incomplete imperfect-information tree. The signal and decision layers are known, but the posterior probabilities at the nature nodes require Bayes’ rule.

Building the Tree: Posterior Probabilities

Apply Bayes’ rule to fill in the nature nodes:

\[ P(\text{rain} \mid \text{f. rain}) = \frac{P(\text{f. rain} \mid \text{rain}) \cdot p}{P(\text{f. rain})} = \frac{0.90 \times 0.30}{0.41} = \frac{0.27}{0.41} \approx 0.66 \]

\[ P(\text{rain} \mid \text{f. dry}) = \frac{P(\text{f. dry} \mid \text{rain}) \cdot p}{P(\text{f. dry})} = \frac{0.10 \times 0.30}{0.59} = \frac{0.03}{0.59} \approx 0.05 \]

The forecast shifts the probability from the base rate of 0.30 to 0.66 (forecast rain) or 0.05 (forecast dry).
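The same Bayes update in Python (a minimal sketch; names are ours):

```python
p_rain = 0.3
hit_rate, correct_rejection = 0.90, 0.80

# Signal probabilities via total probability.
p_f_rain = hit_rate * p_rain + (1 - correct_rejection) * (1 - p_rain)  # 0.41
p_f_dry = 1 - p_f_rain                                                 # 0.59

# Bayes' rule: posterior = likelihood x prior / signal probability.
post_rain_given_f_rain = hit_rate * p_rain / p_f_rain        # 0.27 / 0.41
post_rain_given_f_dry = (1 - hit_rate) * p_rain / p_f_dry    # 0.03 / 0.59
print(round(post_rain_given_f_rain, 2), round(post_rain_given_f_dry, 2))  # 0.66 0.05
```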

The Complete Tree

Now we can fill in every number. Three layers: Signal (chance) \(\to\) Decision (you) \(\to\) Nature (chance).

digraph {
    rankdir=LR
    node [fontname="Helvetica", fontsize=11]
    edge [fontname="Helvetica", fontsize=10]

    F [shape=circle, label="Signal", width=1.0, style=filled, fillcolor="#e0e0e0"]

    DR [shape=box, label="Decision", style=filled, fillcolor="#cde0f5"]
    DD [shape=box, label="Decision", style=filled, fillcolor="#cde0f5"]

    NRB [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#e0e0e0"]
    NRD [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#e0e0e0"]
    NDB [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#e0e0e0"]
    NDD [shape=circle, label="Nature", width=0.7, style=filled, fillcolor="#e0e0e0"]

    p1 [shape=plaintext, label="-$5"]
    p2 [shape=plaintext, label="-$5"]
    p3 [shape=plaintext, label="-$50"]
    p4 [shape=plaintext, label="$0"]
    p5 [shape=plaintext, label="-$5"]
    p6 [shape=plaintext, label="-$5"]
    p7 [shape=plaintext, label="-$50"]
    p8 [shape=plaintext, label="$0"]

    F -> DR [label="  Forecast rain (0.41)"]
    F -> DD [label="  Forecast dry (0.59)"]

    DR -> NRB [label="  Bring", penwidth=2.5]
    DR -> NRD [label="  Don't", style=dashed, color="#999999"]

    DD -> NDB [label="  Bring", style=dashed, color="#999999"]
    DD -> NDD [label="  Don't", penwidth=2.5]

    NRB -> p1 [label="  Rain (0.66)"]
    NRB -> p2 [label="  Dry (0.34)"]

    NRD -> p3 [label="  Rain (0.66)"]
    NRD -> p4 [label="  Dry (0.34)"]

    NDB -> p5 [label="  Rain (0.05)"]
    NDB -> p6 [label="  Dry (0.95)"]

    NDD -> p7 [label="  Rain (0.05)"]
    NDD -> p8 [label="  Dry (0.95)"]
}
Figure 4: Decision tree with an imperfect forecast. Posterior probabilities from Bayes’ rule replace the prior at the nature nodes.

Solving the Tree: Optimal Actions

At each decision node, apply Monday’s cost-loss rule: bring the umbrella when \(P(\text{rain}) \times L > C\).

  • Forecast rain (\(P(\text{rain}) = 0.66\)): \(0.66 \times 50 = 33 > 5\) \(\Rightarrow\) bring umbrella
  • Forecast dry (\(P(\text{rain}) = 0.05\)): \(0.05 \times 50 = 2.50 < 5\) \(\Rightarrow\) leave umbrella

The bold branches in the tree confirm the optimal action at each decision node.

Solving the Tree: Expected Cost

Roll back to the signal node. The expected cost along each optimal branch:

  • Forecast rain (prob 0.41): bring umbrella \(\to\) cost \(= \$5.00\)
  • Forecast dry (prob 0.59): leave umbrella \(\to\) expected cost \(= \frac{0.03}{0.59} \times 50 \approx \$2.54\)

Average over the signal, keeping the posterior unrounded so that \(0.59 \times \frac{0.03}{0.59} = 0.03\), the joint probability of rain and a dry forecast:

\[ E[\text{cost with forecast}] = 0.41 \times 5.00 + 0.59 \times \frac{0.03}{0.59} \times 50 = 2.05 + 1.50 = \$3.55 \]
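The full rollback, carried out with unrounded posteriors (a Python sketch; names are ours):

```python
p_rain, hit, cr = 0.3, 0.90, 0.80
cost = {"bring": {"rain": 5, "dry": 5}, "dont": {"rain": 50, "dry": 0}}

# Signal probabilities and posteriors (Bayes' rule), kept unrounded.
p_f_rain = hit * p_rain + (1 - cr) * (1 - p_rain)      # 0.41
p_f_dry = 1 - p_f_rain                                  # 0.59
posterior = {"f_rain": hit * p_rain / p_f_rain,         # ~0.66
             "f_dry": (1 - hit) * p_rain / p_f_dry}     # ~0.05

def exp_cost(action, q_rain):
    """Expected cost of an action when P(rain) = q_rain."""
    return q_rain * cost[action]["rain"] + (1 - q_rain) * cost[action]["dry"]

# Roll back: best action under each signal's posterior, then average
# over the signals.
cost_with_forecast = sum(
    p_z * min(exp_cost(a, posterior[z]) for a in cost)
    for z, p_z in [("f_rain", p_f_rain), ("f_dry", p_f_dry)]
)
print(round(cost_with_forecast, 2))  # 3.55
```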

EVII

Scenario              Expected cost
No information        \(\$5.00\)
Imperfect forecast    \(\$3.55\)
Perfect information   \(\$1.50\)

\[ \text{EVII} = 5.00 - 3.55 = \$1.45 \]

\[ \text{EVPI} = 5.00 - 1.50 = \$3.50 \]

The forecast captures \(1.45 / 3.50 \approx 41\%\) of the perfect-information value.

The General Formula

The umbrella illustrates a general three-step pattern:

  1. Enumerate signals: each possible signal \(z\) has probability \(p(z)\)
  2. Update and optimize: for each signal, compute the posterior \(p(s \mid z)\) via Bayes’ rule, then choose the best action
  3. Average over signals

\[ \text{EVII} = \underbrace{\int \left[ \max_a \int U(a, s) \, p(s \mid z) \, ds \right] p(z) \, dz}_{\text{get signal, update, optimize, average}} \;-\; \underbrace{\max_a \int U(a, s) \, p(s) \, ds}_{\text{optimize with prior alone}} \]

This reduces to EVPI when the signal perfectly reveals \(s\), and to zero when the signal is pure noise (\(p(s \mid z) = p(s)\)).
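The three-step pattern can be written as a generic function once the integrals are replaced by sums over finite sets. A sketch under that discretization; the `evii` helper and its argument names are illustrative, not from the lecture:

```python
def evii(prior, likelihood, utility, actions):
    """EVII for finite states and signals (sums replace the integrals).

    prior[s] = p(s); likelihood[(z, s)] = p(z | s);
    utility(a, s) = payoff of action a in state s.
    """
    states = list(prior)
    signals = sorted({z for z, _ in likelihood})

    # Baseline: optimize once against the prior alone.
    baseline = max(sum(prior[s] * utility(a, s) for s in states)
                   for a in actions)

    # Get signal, update via Bayes' rule, optimize, average.
    value = 0.0
    for z in signals:
        p_z = sum(likelihood[(z, s)] * prior[s] for s in states)
        if p_z == 0:
            continue  # signal never occurs
        posterior = {s: likelihood[(z, s)] * prior[s] / p_z for s in states}
        value += p_z * max(sum(posterior[s] * utility(a, s) for s in states)
                           for a in actions)
    return value - baseline

# Umbrella numbers: utilities are negative costs.
prior = {"rain": 0.3, "dry": 0.7}
like = {("f_rain", "rain"): 0.9, ("f_dry", "rain"): 0.1,
        ("f_rain", "dry"): 0.2, ("f_dry", "dry"): 0.8}
cost = {("bring", "rain"): 5, ("bring", "dry"): 5,
        ("dont", "rain"): 50, ("dont", "dry"): 0}
result = evii(prior, like, lambda a, s: -cost[(a, s)], ["bring", "dont"])
print(round(result, 2))  # 1.45
```

With a perfect signal (`likelihood` concentrated on the true state) the same function returns the EVPI; with a constant `likelihood` it returns zero.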

Blackwell’s Theorem

A key result (Blackwell, 1953): \(0 \leq \text{EVII} \leq \text{EVPI}\).

Used optimally, an imperfect forecast can never hurt in expectation.

The relationship between forecast quality and value is nonlinear. Forecasts are most valuable near the decision boundary (\(C/L \approx p\)). When the decision is already obvious, even a much better forecast adds little value.
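Both claims are easy to probe numerically. A sketch (names are ours) sweeping the hit rate with the correct-rejection rate held at 0.80, checking Blackwell's bounds at each point:

```python
p = 0.3  # prior P(rain)
cost = {"bring": {"rain": 5, "dry": 5}, "dont": {"rain": 50, "dry": 0}}

def exp_cost(a, q):
    """Expected cost of action a when P(rain) = q."""
    return q * cost[a]["rain"] + (1 - q) * cost[a]["dry"]

cost_no_info = min(exp_cost(a, p) for a in ("bring", "dont"))  # $5.00
cost_perfect = p * 5 + (1 - p) * 0                             # $1.50
evpi = cost_no_info - cost_perfect                             # $3.50

cr = 0.80  # correct-rejection rate, held fixed
eviis = {}
for hit in (0.5, 0.7, 0.9, 0.99):
    p_fr = hit * p + (1 - cr) * (1 - p)   # P(forecast rain)
    # (P(signal), P(rain | signal)) for each signal, via Bayes' rule.
    branches = [(p_fr, hit * p / p_fr),
                (1 - p_fr, (1 - hit) * p / (1 - p_fr))]
    cost_f = sum(p_z * min(exp_cost(a, q) for a in ("bring", "dont"))
                 for p_z, q in branches)
    eviis[hit] = cost_no_info - cost_f
    assert -1e-9 <= eviis[hit] <= evpi + 1e-9  # Blackwell's bounds

print({h: round(v, 2) for h, v in eviis.items()})
```

At low hit rates the posterior never crosses the \(C/L = 0.1\) decision boundary, so the optimal action never changes and the EVII is zero; value appears only once the forecast is sharp enough to flip the decision.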

Summary

  1. Decision trees visualize the order of \(\max\) and \(\sum\) in expected value calculations. The EVPI is the gap between “nature first” and “decision first” trees.
  2. EVII extends this to noisy signals: Signal \(\to\) Decision \(\to\) Nature. Bayes’ rule computes posteriors; Blackwell’s theorem guarantees \(0 \leq \text{EVII} \leq \text{EVPI}\).
  3. For continuous or high-dimensional problems, trees become intractable — the integral formulation is the compact representation.

References

Blackwell, D. (1953). Equivalent comparisons of experiments. The Annals of Mathematical Statistics, 24(2), 265–272. https://doi.org/10.1214/aoms/1177729032