Global Sensitivity Analysis

Lecture (Remote)

Dr. James Doss-Gollin

Wednesday, February 25, 2026

Draft: This content is under development.

Recap: Value of Information

Today

  1. Recap: Value of Information

  2. Local Sensitivity Analysis

  3. Global Sensitivity Analysis

  4. Example: Dike Cost Sensitivity

  5. Practical Considerations

  6. Summary

Monday’s Key Ideas

  1. EVPI measures how much better you could decide with perfect information
  2. EVPXI identifies which uncertainties matter most for the decision
  3. Information has value only when it changes which action is best

Today: a complementary tool that asks a different question.

Two Different Questions

Sensitivity analysis: which inputs most affect the output?

Value of information: which inputs most affect the decision?

These are not the same thing.

A parameter can dominate the output variance but have zero EVPI — if it doesn’t change which action is optimal.

Local Sensitivity Analysis


One-at-a-Time (OAT)

The simplest approach: vary one input while holding all others at their baseline values.

\[ S_i^{\text{local}} \approx \frac{\partial f}{\partial x_i} \bigg|_{x = x_0} \]

Problems:

  • Only measures sensitivity at one point in input space
  • Misses interactions between parameters
  • The “baseline” is arbitrary — sensitivity can change elsewhere
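The OAT idea can be sketched in a few lines of Python (a minimal illustration, not from the slides; the bilinear test function and baseline are made up to show how the local answer depends on where you stand):

```python
import numpy as np

def oat_sensitivity(f, x0, h=1e-6):
    """Central-difference estimate of df/dx_i at the baseline x0,
    varying one input at a time with all others held fixed."""
    x0 = np.asarray(x0, dtype=float)
    grad = np.empty_like(x0)
    for i in range(len(x0)):
        up, dn = x0.copy(), x0.copy()
        up[i] += h
        dn[i] -= h
        grad[i] = (f(up) - f(dn)) / (2 * h)
    return grad

# Interaction example: f(x, y) = x * y, so df/dx = y and df/dy = x.
f = lambda x: x[0] * x[1]
g = oat_sensitivity(f, [1.0, 2.0])   # ≈ [2., 1.]
```

Moving the baseline to `[3.0, 5.0]` gives ≈ `[5., 3.]`: the "sensitivity" of each input is entirely set by the other input's value, which is exactly the interaction structure OAT cannot reveal from a single baseline.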

Lab 5 Was OAT

In Lab 5, you optimized under RCP 8.5 and then under RCP 2.6 — one scenario at a time.

That’s one-at-a-time sensitivity analysis!

What you couldn’t see:

  • How much of the total output variability is due to SLR vs. other parameters?
  • Do SLR and discount rate interact (i.e., does the effect of SLR depend on the discount rate)?

Global Sensitivity Analysis


The Idea

Instead of wiggling one input at a fixed baseline, vary all inputs simultaneously across their full ranges.

Then decompose the output variance:

How much of the variability in the output is attributable to each input?

This accounts for:

  • The full range of each input (not just a local derivative)
  • Nonlinear effects
  • Interactions between inputs

Variance Decomposition

For a model \(Y = f(X_1, X_2, \ldots, X_k)\), we can decompose the total variance:

\[ \text{Var}(Y) = \sum_i V_i + \sum_{i<j} V_{ij} + \cdots + V_{12\ldots k} \]

where:

  • \(V_i = \text{Var}\bigl(E[Y \mid X_i]\bigr)\) — variance due to \(X_i\) alone
  • \(V_{ij}\) — variance due to the interaction of \(X_i\) and \(X_j\)
  • Higher-order terms capture three-way interactions, etc.

Sobol Indices

First-order index — fraction of variance due to \(X_i\) alone:

\[ S_i = \frac{V_i}{\text{Var}(Y)} = \frac{\text{Var}\bigl(E[Y \mid X_i]\bigr)}{\text{Var}(Y)} \]

Total-order index — fraction of variance involving \(X_i\) (including all interactions):

\[ S_{T_i} = 1 - \frac{\text{Var}\bigl(E[Y \mid X_{\sim i}]\bigr)}{\text{Var}(Y)} \]

where \(X_{\sim i}\) means all inputs except \(X_i\).
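A worked example (added here for concreteness): take \(Y = X_1 X_2\) with independent \(X_1, X_2 \sim \text{Uniform}(0,1)\). Then

\[ E[Y \mid X_1] = \frac{X_1}{2} \quad\Rightarrow\quad V_1 = \text{Var}\!\left(\frac{X_1}{2}\right) = \frac{1}{48}, \]

and by symmetry \(V_2 = 1/48\). Since \(\text{Var}(Y) = E[X_1^2]\,E[X_2^2] - \bigl(E[X_1]\,E[X_2]\bigr)^2 = \frac{1}{9} - \frac{1}{16} = \frac{7}{144}\),

\[ S_1 = S_2 = \frac{1/48}{7/144} = \frac{3}{7}, \qquad V_{12} = \frac{7}{144} - \frac{6}{144} = \frac{1}{144}, \qquad S_{T_1} = 1 - \frac{V_2}{\text{Var}(Y)} = \frac{4}{7}. \]

Here \(S_1 + S_2 = 6/7 < 1\) and \(S_{T_i} > S_i\): even this simple product model has a genuine interaction term.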

Interpreting Sobol Indices

| Pattern | Interpretation |
|---|---|
| \(S_i\) large, \(S_{T_i} \approx S_i\) | \(X_i\) has a strong main effect, few interactions |
| \(S_i\) small, \(S_{T_i}\) large | \(X_i\) matters mainly through interactions |
| \(S_i\) small, \(S_{T_i}\) small | \(X_i\) doesn’t matter much at all |
| \(\sum_i S_i \approx 1\) | Model is mostly additive (interactions are weak) |

Computing Sobol Indices

The standard approach (Saltelli et al., 2010):

  1. Define input distributions for each uncertain parameter
  2. Generate two independent sample matrices \(A\) and \(B\) (e.g., \(N \times k\))
  3. Create “cross” matrices \(A_B^{(i)}\) by replacing column \(i\) of \(A\) with column \(i\) of \(B\)
  4. Evaluate the model on all matrices
  5. Estimate \(S_i\) and \(S_{T_i}\) from the sample covariances

This requires \(N(k+2)\) model evaluations, where \(N\) is the sample size and \(k\) is the number of inputs.
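The five steps above can be sketched in Python (a minimal Monte Carlo illustration, assuming independent Uniform(0, 1) inputs; the estimators are the first-order and Jansen total-order forms from Saltelli et al., 2010, and the additive test model is made up to check the result):

```python
import numpy as np

def sobol_indices(f, k, N=100_000, seed=0):
    """Pick-and-freeze estimates of first- and total-order Sobol indices.
    Assumes independent Uniform(0, 1) inputs; rescale inside f as needed.
    Cost: N * (k + 2) model evaluations."""
    rng = np.random.default_rng(seed)
    A = rng.random((N, k))            # step 2: two independent sample matrices
    B = rng.random((N, k))
    fA, fB = f(A), f(B)
    var_Y = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(k), np.empty(k)
    for i in range(k):
        AB = A.copy()
        AB[:, i] = B[:, i]            # step 3: column i of A replaced by B's
        fAB = f(AB)                   # step 4: evaluate the model
        S1[i] = np.mean(fB * (fAB - fA)) / var_Y        # step 5: first-order
        ST[i] = 0.5 * np.mean((fA - fAB) ** 2) / var_Y  # total-order (Jansen)
    return S1, ST

# Additive check: Y = X1 + 2*X2 has S1 = 0.2, S2 = 0.8, and S_Ti = S_i
S1, ST = sobol_indices(lambda X: X[:, 0] + 2.0 * X[:, 1], k=2)
```

For this additive model the first-order indices sum to one and match the total-order indices; a gap between \(S_i\) and \(S_{T_i}\) would signal interactions.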

Example: Dike Cost Sensitivity


Which Uncertainties Drive Total Cost?

Consider the ICOW dike problem with uncertain inputs:

| Parameter | Uncertainty | Type |
|---|---|---|
| Sea-level rise trajectory | BRICK ensemble | Scenario |
| Discount rate \(r\) | 1–7% | Deep uncertainty |
| GEV surge parameters \((\mu, \sigma, \xi)\) | Estimation uncertainty | Statistical |
| Economic growth rate | 0–3% | Deep uncertainty |

GSA question: which of these contributes most to the variance in total cost?

VOI question: which of these, if resolved, would most change the optimal dike height?

GSA vs. VOI: They Can Disagree

High \(S_i\), low EVPXI:

The discount rate may dominate the total cost variance, but if the optimal dike height is similar across discount rates, knowing \(r\) doesn’t help the decision.

Low \(S_i\), high EVPXI:

A parameter might contribute little to total cost variance, but its effect is concentrated near the decision boundary, flipping which action is optimal.

Practical Considerations


When to Use Each Tool

Use GSA when:

  • You want to understand model behavior
  • You want to simplify a model (fix insensitive parameters)
  • You want to prioritize research to reduce output uncertainty

Use VOI when:

  • You want to prioritize research to improve decisions
  • You want to know if gathering more information is worth the cost
  • You have a specific decision to make

GSA in Practice: Tips

  • Sample size matters. Sobol indices need large \(N\) to converge (often \(N > 1000\)). Start small, check convergence.
  • Input distributions matter. The indices depend on what ranges and distributions you assume. Be deliberate about these choices.
  • Screening first. For models with many inputs (\(k > 10\)), use a cheaper method (e.g., Morris/elementary effects) to screen out unimportant parameters, then run Sobol on the survivors.
  • Correlated inputs violate the independence assumption in standard Sobol decomposition. There are extensions, but they’re more complex.
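The screening step can be sketched as follows (a simplified radial variant of the elementary-effects idea, not the full Morris trajectory design; the three-input model, with a deliberately inert \(X_3\), is made up for illustration):

```python
import numpy as np

def mu_star(f, k, r=50, delta=0.5, seed=1):
    """Mean absolute elementary effect for each input, from r random
    base points in [0, 1]^k, perturbing one input at a time by delta."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for j in range(r):
        x = rng.random(k) * (1 - delta)   # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta
            ee[j, i] = abs(f(xp) - fx) / delta
    return ee.mean(axis=0)   # small mu* => candidate to fix at a constant

# X3 has no effect: mu* flags X1 and X2, so Sobol need only run on those
f = lambda x: 4 * x[0] + x[1] ** 2 + 0 * x[2]
mu = mu_star(f, k=3)
```

Here screening costs only \(r(k+1)\) evaluations, so it stays affordable even when \(k\) is large and the model is expensive.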

Connection to the Course Arc

  • Week 6: We optimized under specific scenarios (one at a time)
  • This week: We ask which uncertainties matter — for the output (GSA) and for the decision (VOI)
  • Week 8 (Robustness): What if we can’t even specify input distributions? Deep uncertainty methods don’t require probabilities.
  • Week 10 (Scenario Discovery): Finding regions of the input space where the decision fails — a targeted form of sensitivity analysis

Summary


Two Complementary Tools

| | Sensitivity Analysis | Value of Information |
|---|---|---|
| Question | What affects the output? | What affects the decision? |
| Measures | Variance decomposition | Expected improvement in objective |
| Requires | Input distributions | Input distributions + decision model |
| Use for | Understanding models, simplification | Research prioritization, monitoring design |

Use both. GSA tells you what drives uncertainty. VOI tells you what’s worth learning.

Friday: Lab 6

You’ll compute EVPI and EVPXI on the ICOW dike problem.

This is the VOI side of what we discussed today — quantifying the decision value of resolving specific uncertainties.

References

Saltelli, A., Annoni, P., Azzini, I., Campolongo, F., Ratto, M., & Tarantola, S. (2010). Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index. Computer Physics Communications, 181(2), 259–270.