Robustness


Lecture

Mon., Mar. 18

Refresher

Today

  1. Refresher

  2. Robustness motivation

  3. Robustness metrics

  4. Critiques and alternative perspectives

  5. Logistics

Course organization

  1. Module 1: background knowledge (climate risk, etc.)
  2. Module 2: quantitative tools for decision-making under uncertainty
  3. Module 3: getting more philosophical and practical

Notation

Why do we use computers?

Your turn

Robustness motivation

The problem

Abraham et al. (2020) show that water utilities systematically over-estimate future demand. Relying on a single, certain forecast of future water demand can motivate over-building infrastructure.

Robustness

We want to make choices and design infrastructure that are robust to errors in demand forecasts.

Definition

The insensitivity of system design to errors, random or otherwise, in the estimates of those parameters affecting design choice (Matalas & Fiering, 1977)

Mathematical definitions differ dramatically, however (Herman et al., 2015)! Today we will discuss some overlapping perspectives and ideas about robustness.

Bottom-up analysis

  1. Top-down, certain: experts develop a “best” forecast of future conditions, then choose a design that is optimal under that forecast
  2. Top-down, uncertain: experts assign likelihoods to uncertain states of the world, then choose a design that optimizes expected performance (Herman et al., 2015)
  3. Bottom-up: first explore to identify states of the world (SOWs) a solution is vulnerable to, then assess their likelihood (more Wednesday!)

Robustness metrics

Taxonomy

Herman et al. (2015)

Regret

Regret measures how sorry you are with your choice. There are two main definitions (Herman et al., 2015):

  1. Deviation of a single solution in the real world or a simulated SOW from its baseline (expected) performance
  2. Difference between the performance of a solution in the real world or a simulated SOW and the best possible performance in that SOW
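Both regret definitions can be sketched with a small performance matrix. This is a minimal illustration with hypothetical numbers, not code from any particular study:

```python
import numpy as np

# Hypothetical performance matrix: performance[i, j] is the performance of
# solution i in state of the world (SOW) j. Higher is better here.
performance = np.array(
    [[10.0, 8.0, 3.0],   # solution A
     [ 9.0, 9.0, 7.0]]   # solution B
)

# Definition 1: deviation from each solution's own baseline
# (expected) performance across SOWs
baseline = performance.mean(axis=1, keepdims=True)
regret_from_baseline = baseline - performance

# Definition 2: shortfall from the best achievable performance in each SOW
best_in_sow = performance.max(axis=0, keepdims=True)
regret_from_best = best_in_sow - performance

# A common summary: each solution's worst-case (maximum) regret across SOWs
max_regret = regret_from_best.max(axis=1)
```

Note that the two definitions can rank solutions differently: solution A is best in the first SOW but suffers large regret in the third.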

Satisficing

Satisficing measures whether solutions achieve specific minimum requirements, condensing performance into a binary “satisfactory” or “unsatisfactory”.

  1. With many SOWs, many studies use a domain criterion: over what fraction of SOWs does a solution satisfy a performance threshold (Herman et al., 2015)?
  2. Note: this is equivalent to asking what is the probability that a solution satisfies a performance threshold, although many people who calculate robustness metrics are allergic to the word “probability”
  3. More complex satisficing criteria: see McPhail et al. (2019).
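The domain criterion is simple to compute: count the fraction of SOWs in which a solution meets the threshold. A minimal sketch with a hypothetical reliability threshold and made-up numbers:

```python
import numpy as np

# Hypothetical: reliability of one design evaluated in five sampled SOWs
reliability = np.array([0.99, 0.97, 0.92, 0.96, 0.89])
threshold = 0.95  # assumed minimum acceptable reliability

# Domain criterion: fraction of SOWs where performance meets the threshold.
# Each SOW is condensed to a binary satisfactory/unsatisfactory outcome.
satisfied = reliability >= threshold
robustness = satisfied.mean()  # 3 of 5 SOWs satisfy the threshold
```

If the SOWs were sampled from some probability distribution, this fraction is exactly an estimate of the probability of satisfying the threshold, which is the equivalence noted above.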

Critiques and alternative perspectives

Parameters?

The robustness metrics we’ve seen are defined in terms of parameters: we have a model with parameters, and we define SOWs as different values of those parameters.

Is this a good way to quantify our conceptual ideas about robustness?

Combinations of uncertainties

In practice, we often have a combination of parametric uncertainties and “model structure” uncertainties (Doss-Gollin & Keller, 2023). And not all SOWs are equally likely!

Climate scenario uncertainties are “deep” (more next week!), but it would be a mistake to say we don’t know anything and all futures are equally likely (Hausfather & Peters, 2020)

Doss-Gollin & Keller (2023)

Alternative perspective

  1. Use “prior beliefs” to assign likelihoods to different SOWs
  2. Use the quantitative toolkit (optimization, sensitivity analysis, etc.)
  3. Vary the prior beliefs: seek a solution that is robust to different probability distributions rather than to different parameter values.
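The steps above can be sketched as weighting SOW performance by several candidate priors and checking that expected performance stays acceptable under all of them. The numbers are hypothetical:

```python
import numpy as np

# Hypothetical performance of one solution in three SOWs
performance = np.array([10.0, 6.0, 2.0])

# Two candidate prior distributions over the SOWs (each row sums to 1)
priors = np.array(
    [[0.6, 0.3, 0.1],   # "optimistic" prior: good SOWs more likely
     [0.2, 0.3, 0.5]]   # "pessimistic" prior: bad SOWs more likely
)

# Expected performance under each candidate prior
expected = priors @ performance

# Robustness to the prior: how the solution fares under the least
# favorable of the candidate probability distributions
worst_case_expected = expected.min()
```

Here robustness is assessed against the set of plausible priors, not against a set of equally weighted parameter values.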

Logistics

Exam 2

  • Exam 2 was scheduled for 3/22
  • It will be postponed to 3/29
  • 3/22: review session
  • Lake problem lab will be postponed to 4/5
  • I will have Exam 1 graded by 3/22 so you know where you stand!

Final project

In your final project, you will add something that’s missing to the house-elevation problem. This might be:

  • More complexity / realism for a model component (e.g., depth-damage function, nonstationary storm surge probability, household financial constraints, etc.)
  • A new model component (e.g., a better decision alternative, etc.)
  • Applying a decision tool from class (e.g., sequential decision analysis, robustness checks, etc.)

Timeline coming soon – don’t worry about Canvas for now.

References

Abraham, S., Diringer, S., & Cooley, H. (2020). An Assessment of Urban Water Demand Forecasts in California. Oakland, California: Pacific Institute. Retrieved from https://pacinst.org/wp-content/uploads/2020/08/Pacific-Institute-Assessment-Urban-Water-Demand-Forecasts-in-CA-Aug-2020.pdf
Doss-Gollin, J., & Keller, K. (2023). A subjective Bayesian framework for synthesizing deep uncertainties in climate risk management. Earth’s Future, 11(1). https://doi.org/10.1029/2022EF003044
Hausfather, Z., & Peters, G. P. (2020). Emissions – the ’business as usual’ story is misleading. Nature, 577(7792, 7792), 618–620. https://doi.org/10.1038/d41586-020-00177-3
Herman, J. D., Reed, P. M., Zeff, H. B., & Characklis, G. W. (2015). How should robustness be defined for water systems planning under change? Journal of Water Resources Planning and Management, 141(10), 04015012. https://doi.org/10.1061/(asce)wr.1943-5452.0000509
Matalas, N. C., & Fiering, M. B. (1977). 6. Water-Resource Systems Planning. In Climate, Climatic Change, and Water Supply (pp. 99–110). Washington, DC: The National Academies Press. Retrieved from https://www.nap.edu/read/185/chapter/11
McPhail, C., Maier, H. R., Kwakkel, J. H., Giuliani, M., Castelletti, A., & Westra, S. (2019). Robustness metrics: How are they calculated, when should they be used and why do they give different results? Earth’s Future, 169–191. https://doi.org/10.1002/2017ef000649