Value of Information

Lecture

Author: Dr. James Doss-Gollin
Published: Monday, February 23, 2026

1 The Umbrella Problem

Consider the simplest possible decision under uncertainty. You’re heading out the door and must decide whether to bring an umbrella:

  • Bringing the umbrella costs you some hassle: \(C\)
  • If it rains and you don’t have one, you get soaked: loss \(L\)
  • The probability of rain is \(p\)

The expected cost of bringing the umbrella is simply \(C\) (you pay regardless of weather). The expected cost of leaving it is \(pL\) (you pay the loss only when it rains, which happens with probability \(p\)).

The optimal rule is: bring the umbrella whenever \(C < pL\), i.e., when

\[ p > \frac{C}{L} \tag{1}\]

The ratio \(C/L\) is the cost-loss ratio. It’s the probability threshold above which protection is worthwhile. If protection is cheap relative to the potential loss, you should protect even at low probabilities.

To make this concrete, suppose \(C = \$5\) and \(L = \$50\), so \(C/L = 0.1\). The payoff table is:

|                    | Bring umbrella | Don’t bring |
|--------------------|----------------|-------------|
| Rain (\(p\))       | \(-5\)         | \(-50\)     |
| No rain (\(1-p\))  | \(-5\)         | \(0\)       |

For what values of \(p\) would you leave the umbrella at home? From Equation 1, you need \(p < C/L = 5/50 = 0.10\). Only if the chance of rain is below 10% is it worth skipping the umbrella.
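The threshold rule in Equation 1 can be sketched in a few lines (the function name here is illustrative, not from the lecture):

```python
def should_protect(p, cost, loss):
    """Protect (bring the umbrella) exactly when p exceeds the cost-loss ratio C/L."""
    return p > cost / loss

# With C = $5 and L = $50, the threshold is C/L = 0.10
print(should_protect(0.30, 5, 50))  # True: a 30% rain chance beats the threshold
print(should_protect(0.05, 5, 50))  # False: below 10%, skip the umbrella
```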

Now suppose a weather oracle offers to tell you for certain whether it will rain. With perfect information, you bring the umbrella only when it rains — you protect when needed and avoid the hassle when it’s dry.

  • Without the oracle: you either always protect (\(C\)) or always gamble (\(pL\)), whichever is cheaper
  • With the oracle: you pay \(C\) only a fraction \(p\) of the time

How much should you pay the oracle? That question — the maximum you’d pay for perfect information before making your decision — is the expected value of perfect information (EVPI).

2 Expected Value of Perfect Information

The core idea of VOI is simple: compare the expected payoff of your optimized decision made with the information to the expected payoff of your optimized decision made without it. The difference is what the information is worth.

For EVPI, this means comparing two ways to make a decision. Let \(a\) denote the action (e.g., dike height), \(s\) the uncertain scenario, \(p(s)\) the probability of scenario \(s\), and \(U(a, s)\) the payoff from taking action \(a\) in scenario \(s\).

Without information (“average then optimize”), you must choose the best action before learning which scenario is true. You evaluate each action by averaging its payoff over all scenarios, then pick the best one:

\[ \max_a \int U(a, s) \, p(s) \, ds \tag{2}\]

With perfect information (“optimize then average”), you choose the best action after learning which scenario is true. For each scenario, you pick the action that maximizes payoff in that scenario. Since you don’t know in advance which scenario you’ll observe, you take the expectation:

\[ \int \max_a \, U(a, s) \, p(s) \, ds \tag{3}\]

The EVPI is the gap between Equation 3 and Equation 2:

\[ \text{EVPI} = \int \max_a \, U(a,s) \, p(s) \, ds \;-\; \max_a \int U(a,s) \, p(s) \, ds \tag{4}\]

This is always \(\geq 0\). The reason is Jensen’s inequality: the max function is convex, so \(E[\max] \geq \max[E]\). Intuitively, information can never make you worse off — you can always ignore it and revert to your prior decision.

The EVPI has a clean interpretation: it is the maximum you should pay a clairvoyant. If the EVPI is small, don’t bother gathering more information. If it’s large, invest in learning.
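For a finite set of actions and scenarios, the integrals in Equation 4 become sums, and the whole calculation reduces to a few array operations. A minimal sketch (the function name and payoff-matrix layout are ours, not from the lecture):

```python
import numpy as np

def evpi(payoff, probs):
    """EVPI for discrete scenarios.

    payoff[a, s]: utility of taking action a in scenario s.
    probs[s]: probability of scenario s.
    """
    payoff = np.asarray(payoff, dtype=float)
    probs = np.asarray(probs, dtype=float)
    best_without = (payoff @ probs).max()     # average then optimize (Eq. 2)
    best_with = payoff.max(axis=0) @ probs    # optimize then average (Eq. 3)
    return best_with - best_without
```

Because `best_with >= best_without` by Jensen's inequality, the returned value is always nonnegative.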

2.1 Worked Example

Return to the umbrella with \(p = 0.30\), \(C = 5\), \(L = 50\). Here the scenarios are discrete (rain or no rain), so the integrals in Equation 4 become sums.

Without information (Equation 2): bring the umbrella (cost \(= 5\)), since \(5 < 0.3 \times 50 = 15\).

With perfect information (Equation 3):

  • When it rains (prob 0.3): bring umbrella, cost \(= 5\)
  • When it doesn’t rain (prob 0.7): leave it, cost \(= 0\)
  • Expected cost \(= 0.3 \times 5 + 0.7 \times 0 = 1.50\)

\[\text{EVPI} = 5.00 - 1.50 = \$3.50\]

You’d pay up to $3.50 for a perfect rain forecast. The savings come entirely from avoiding unnecessary protection on dry days.
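As a quick sketch, the same arithmetic can be checked directly:

```python
# Umbrella numbers from the text: p = 0.3, C = 5, L = 50
p, C, L = 0.3, 5.0, 50.0

cost_without_info = min(C, p * L)        # bring (5) vs. gamble (15) -> 5.0
cost_with_info = p * C + (1 - p) * 0.0   # umbrella on rainy days only -> 1.5
evpi = cost_without_info - cost_with_info
print(f"{evpi:.2f}")  # 3.50
```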

2.2 When is EVPI Zero?

EVPI \(= 0\) when the same action is optimal under all scenarios. In that case, learning which scenario is true wouldn’t change your decision.

Example: if the umbrella were free (\(C = 0\)), bringing it would be weakly optimal whether or not it rains, so a perfect forecast could never change your action and EVPI \(= 0\). With \(C > 0\), by contrast, the best action differs between the rain and no-rain scenarios, so EVPI is strictly positive for any \(0 < p < 1\).

This is an important conceptual point: uncertainty and information value are not the same thing. You can have enormous uncertainty about which scenario will occur but zero EVPI, if the uncertainty doesn’t affect which action is best. Conversely, a small amount of uncertainty can have high EVPI if you’re right on the decision boundary.
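Both points can be seen by sweeping the rain probability in the umbrella problem (the helper function name is ours): EVPI vanishes as uncertainty disappears (\(p \to 0\) or \(p \to 1\)) and is largest right at the decision threshold \(p = C/L\).

```python
def umbrella_evpi(p, cost=5.0, loss=50.0):
    """Expected cost of the best fixed action minus expected cost with a perfect forecast."""
    without_info = min(cost, p * loss)  # always protect vs. always gamble
    with_info = p * cost                # protect only on days it actually rains
    return without_info - with_info

for p in [0.0, 0.10, 0.30, 0.50, 1.0]:
    print(f"p = {p:.2f}: EVPI = ${umbrella_evpi(p):.2f}")
# EVPI peaks at the threshold p = C/L = 0.10 and is zero at p = 0 and p = 1
```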

3 Generalizing: The Cost-Loss Framework

The umbrella is a special case of a general protect-or-not problem studied by Murphy et al. (1985). Any time you face a binary choice — protect (cost \(C\)) or gamble on a loss (\(L\)) — the cost-loss ratio \(C/L\) determines the decision threshold, and the EVPI measures the value of resolving uncertainty before acting.

4 VOI Requires Probabilities

Notice that every VOI calculation requires a probability distribution \(p(s)\) over the scenarios. The integrals in Equation 4 are only defined if you can write down \(p(s)\). Without probabilities, these quantities are undefined.

This is a real limitation. In many climate risk problems, we face deep uncertainty: situations where we cannot confidently assign probabilities to the scenarios. We might disagree about the right probability model, or the range of plausible futures might be so wide that any single distribution feels arbitrary.

When that happens, VOI breaks down — and we need different tools. That is exactly what we’ll tackle in Week 8: robustness and decision-making under deep uncertainty, where the goal shifts from “optimize given probabilities” to “find strategies that perform acceptably across many possible futures.”

5 Looking Ahead

Wednesday: We’ll extend EVPI to imperfect information (EVII) using decision trees and Bayes’ rule, then introduce Global Sensitivity Analysis — a complementary tool that asks “which uncertainty affects the output?” rather than “which uncertainty affects the decision?”

Friday (Lab 6): You’ll compute EVPI and EVII for a homeowner deciding how high to elevate their house above the floodplain, where the key uncertainty is which sea-level rise pathway the world follows.

5.1 References

Murphy, A. H., Katz, R. W., Winkler, R. L., & Hsu, W.-R. (1985). Repetitive Decision Making and the Value of Forecasts in the Cost-Loss Ratio Situation: A Dynamic Model. Monthly Weather Review, 113(5), 801–813. https://doi.org/10.1175/1520-0493(1985)113<0801:RDMATV>2.0.CO;2