Monday, July 3, 2017

Which session to submit to? Hydrologic Uncertainty at AGU Fall Meeting 2017



Uncertainty is a multi-faceted topic. To help you choose a session for the AGU Fall Meeting 2017, we have put together a shortlist of sessions related to characterizing uncertainty, living with uncertainty, and reducing uncertainty.

Comments and questions about specific sessions are welcome, as are pointers to any sessions we may have missed.

The early abstract submission deadline is 26th July 2017.

Characterizing uncertainty

· Uncertainty analysis (UA) and sensitivity analysis (SA) methods
· Parameter estimation and data assimilation
· Ensemble methods
· Stochastic modelling

Living with uncertainty

· Risk management and Robust Decision Making
· Uncertainty in Decision Support Systems

Reducing uncertainty

· Data-based and machine-learning modelling approaches
· Advancing process-based modelling

Friday, June 9, 2017

Uncertainty about Uncertainty


I like to point to Keith Beven's (1987) conference paper, titled ‘Towards a new paradigm in hydrology’, as a starting point for new hydrologists developing an understanding of uncertainty in the hydrological sciences[1]. In that paper Keith argued that “little to no success” had been achieved against the fundamental problem of developing theories about how small-scale complexities lead to large-scale behavior in hydrological systems[2].
The paper discussed what hydrologists might do about this situation, and in the last two paragraphs Keith made two predictions:
· First, that the hydrology of the future will require a theoretical framework that is inherently stochastic, in order to deal with the “value of imperfect observations and qualitative knowledge in reducing predictive uncertainty.”
· Second, that hydrologists would not actually develop this type of theoretical framework, but would instead capitalize on the (then) emerging power of desktop computing to approximate uncertainty, leading to results that “may not be pretty,” but which are “realistic in reflecting both our understanding and lack of knowledge of hydrological systems.”
Of course, both of his predictions were essentially correct. In 2007 Jeff McDonnell wrote “to make continued progress in watershed hydrology … we need to … explore the set of organizing principles that might underlie heterogeneity and complexity” (McDonnell et al., 2007). Jeff went on to describe several possibilities for ‘moving beyond’ heterogeneity and process complexity, but the point is nevertheless clear: the problem remains unsolved. Similarly, Keith wrote last year that “our perceptual model of uncertainty is now much more sophisticated … but this has not resulted in analogous progress in uncertainty quantification” (Beven, 2016).
The situation in hydrology right now is that we understand that macro-scale behaviors of watersheds are governed by small-scale heterogeneities, but we don’t have a theory about how this works, and we don’t have any fundamental theory that allows us to (reliably) quantify predictive uncertainties related to these processes.
Instead, what we have are ad hoc strategies for obtaining numbers that seem like they might be related to uncertainty. One example of this is the recent proliferation of multi-parameterization modeling systems that allow the user to choose among a variety of options for different flux parameterizations. The Structure for Unifying Multiple Modeling Alternatives is one such system; Clark et al. (2015) wrote that “[SUMMA] provides capabilities to evaluate different representations of spatial heterogeneity and different flux parameterizations, and therefore tackle the fundamental modeling challenge of simulating the fluxes of water and energy over a hierarchy of spatial scales.” It is unclear why predicting with a variety of different parameterizations of scale-dependent processes allows us to ‘tackle’ scale-related challenges: we still lack a fundamental theory of hydrologic scaling, and making predictions with several different parameterizations is not in any way reflective of the actual nature of our lack of knowledge about the principles and behavior of hydrologic systems.
Other methods that we often use for uncertainty quantification suffer from similar problems. Bayesian methods allow us to do precisely one thing: inter-compare or average several different competing models. Bayesian methods, and indeed any methods for model inter-comparison or model averaging, are fundamentally incapable of helping us to understand the difference between our family of models (i.e., those models that are assigned finite probability by the prior) and the real system. Gelman & Shalizi (2013) give a philosophical treatment of this problem that is worth reading.
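To make this limitation concrete, here is a minimal sketch of Bayesian model averaging (the two "models" and the data are hypothetical, invented purely for illustration; this is not drawn from any of the cited papers). Every probability in it is conditional on the chosen model family: the arithmetic redistributes belief between the candidates and is silent about how far that family sits from the real system.

```python
import numpy as np

# Hypothetical setup: two competing "models" are just fixed predictive
# distributions over streamflow; y are synthetic observations.
rng = np.random.default_rng(0)
y = rng.normal(2.0, 0.5, size=50)  # invented observed streamflow

def log_lik(y, mean, sd):
    """Gaussian log-likelihood of the observations under one model."""
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (y - mean)**2 / (2 * sd**2))

log_liks = np.array([log_lik(y, 1.8, 0.6),   # model A
                     log_lik(y, 2.4, 0.6)])  # model B
prior = np.array([0.5, 0.5])  # prior defined over the model family only

# Posterior model weights: p(m|y) is proportional to p(y|m) * p(m).
w = prior * np.exp(log_liks - log_liks.max())
w /= w.sum()

# The weights re-allocate belief *within* {A, B}; nothing here measures
# whether the true system resembles either model.
print(w)
```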
It has been suggested that we might use empirical methods to develop probability distributions over different components of predictive imprecision (e.g., Montanari and Koutsoyiannis, 2012), but this type of approach assumes stationarity not only in those aspects of the hydrological system that are captured by the model, but also stationarity in the relationship between model error and those parts of the hydrological system that are not captured by the model.
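A toy illustration of how strong that assumption is (my own construction, not the actual Montanari and Koutsoyiannis procedure): fit an empirical error distribution on a calibration period and reuse it for prediction. The resulting intervals are meaningful only if the error process itself is stationary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration-period residuals (observed minus modeled flow).
calib_resid = rng.normal(0.0, 0.3, size=200)

# Empirical error model: fit a Gaussian to the calibration residuals.
mu, sigma = calib_resid.mean(), calib_resid.std(ddof=1)

# Predictive "uncertainty" for new simulations: model output plus the
# fitted error distribution. This is valid only if the error process in
# the prediction period matches the calibration period, i.e., only if
# the model-error relationship is stationary, which is exactly the
# assumption questioned above.
new_sim = np.array([1.2, 1.5, 1.1])  # invented model outputs
lower = new_sim + mu - 1.96 * sigma
upper = new_sim + mu + 1.96 * sigma
print(np.c_[lower, upper])  # 95% intervals, conditional on stationarity
```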
The point is that uncertainty is tautologically inestimable, and so it is really no surprise that Keith’s second prediction came true – there was never any real possibility to develop a rigorous theoretical basis for understanding uncertainty, scale-dependent or otherwise. More than that, the methods that we have come up with to approximate uncertainty don’t actually do that at all – at least not in any way that is fundamentally or theoretically reliable. I propose the following challenge: provide a theorem that proves a bounded, asymptotic, or even consistent relationship between any quantitative estimator and real-world uncertainty under evaluable assumptions. I offer the standard wager for scientific controversy[3]: a bottle of Yamazaki 12 year, or comparable.
Until we have such a theorem, I propose that it is not useful to talk about uncertainty quantification, approximate or otherwise, because none of our estimators are related to real uncertainty in any systematic way.
Instead, I predict a new paradigm change in hydrology. I suspect that within the next 30 years, the conversation in hydrology will be about information rather than uncertainty. The reason for this is that while it is impossible to estimate uncertainty (even approximately), it is possible to obtain at least bounded estimates of information measures (Nearing and Gupta, 2017). The main project of science seems to be about comparing the information contained in observation data with the information provided by a hypothesis-driven model. Similarly, the problem of scaling under heterogeneity seems to be fundamentally about cross-scale information transfers, rather than about uncertainty. Given recent work in basic physics (e.g., Cao et al., 2017), I suspect that it will not take three more decades for us to discover real scaling theories under this type of perspective.
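As a rough sketch of what an information-centric comparison can look like (a crude plug-in mutual-information estimate over synthetic data; the binning scheme and the data are my own illustration, not the estimators developed by Nearing and Gupta, 2017):

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) terms
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(2)
obs = rng.normal(0, 1, 1000)                  # hypothetical observations
model = 0.8 * obs + rng.normal(0, 0.5, 1000)  # hypothetical model output

# How many bits of information does the model carry about the observations?
print(mutual_information(model, obs))
```

Plug-in estimates like this one are biased, but the bias can be bounded, which is the sense in which information, unlike uncertainty, admits at least bounded estimation.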
Of course, we will always want to know the reliability of our model predictions, and for this reason the concept of uncertainty will never go away completely. I propose, however, that the tractable and meaningful challenge is to understand the actual predictive precision implied by our hydrological theory and hypotheses. At present, we do not do this. Current practice is to build models that are over-precise, and then append ‘uncertainty’ distributions to their predictions. This method of dealing with a lack of complete knowledge stems fundamentally from our Newtonian heritage. Essentially all process-based hydrological models are expressed as PDEs, which must admit solution in order to make a prediction.
There are two problems with building dynamical models as PDEs. First, such models make ontological predictions (predictions about what will happen), whereas what we actually want are epistemological predictions (predictions about what we can know about what will happen). The uncertainty probabilities that we append to our models are the latter, and they are what we actually need for both hypothesis testing and decision support. But these probabilities are not the product of actually solving our model equations. Even if our model is a stochastic PDE, the random walk component is simply an ad hoc appendage on the drift function. Sampling model inputs or different model structures does not actually tell us anything about our lack of knowledge associated with any of those model structures. Models built as PDEs simply do not solve for anything that represents what we can know from our physical theory and hypotheses.
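A toy example of that "ad hoc appendage" (my own illustration, not drawn from any cited model): a stochastic storage equation integrated with Euler-Maruyama. The drift carries all of the physics; the diffusion amplitude is bolted on, derived from nothing in the theory.

```python
import numpy as np

rng = np.random.default_rng(4)

def drift(S, P=2.0, k=0.1):
    """Toy water-balance physics: constant inflow P minus linear-reservoir outflow."""
    return P - k * S

# Euler-Maruyama integration of dS = drift(S) dt + sigma dW.
dt, sigma, S = 0.1, 0.5, 10.0  # sigma is the appended 'uncertainty' knob
path = []
for _ in range(1000):
    S += drift(S) * dt + sigma * np.sqrt(dt) * rng.normal()
    path.append(S)

# Nothing in the physics determines sigma: it parameterizes our ignorance
# without being derived from it.
print(np.mean(path), np.std(path))
```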
The second problem is that a PDE only provides a prediction if it can be solved. This requires that we prescribe values (or distributions) over all parameters contained in our hypothetical parameterizations, some of which are impossible to measure and otherwise difficult to estimate. It would be exciting to have a method for constructing models that allows us to assign values to only those parameters that we feel we actually have some information about.
But there is, in principle (although I have no example of such), a way to build this type of model. Instead of expressing conservation principles using differential equations, we could express them as symmetry constraints on probability distributions. To do this, we might specify a Bayesian network such that each node is a random variable representing a particular scale-dependent quantity at a particular time and location within the modeled system; conservation laws could then be used to effectively rule out large portions of the joint space of values over these random variables. By imposing conservation laws and other physical principles as constraints on joint probability distributions, our models would fundamentally solve for what we can know about the future or unobserved behavior of a dynamical system conditional on whatever information (theories, hypotheses, data) are used to build the model. In principle, anything that we do know, or wish to hypothesize, could be imposed as a constraint on the joint distribution over a family of random variables representing different aspects of system behavior. 
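Purely to give the flavor of this (a deliberately crude toy of my own; brute-force rejection sampling stands in for a real Bayesian-network inference engine, and this is nothing like the full kind of model envisioned above): place priors only over the water-balance terms we claim some information about, and let mass conservation carve the joint distribution down to the physically admissible region.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Priors over water-balance terms for one control volume and time step
# (all values hypothetical, in mm). Distributions are assigned only to
# terms we claim to know something about.
P  = rng.gamma(4.0, 5.0, n)     # precipitation
ET = rng.gamma(3.0, 4.0, n)     # evapotranspiration
dS = rng.normal(0.0, 5.0, n)    # storage change
Q  = rng.uniform(0.0, 40.0, n)  # runoff: a deliberately vague prior

# Conservation constraint: P = ET + Q + dS (within a closure tolerance).
# Keeping only samples that satisfy it concentrates the joint distribution
# on physically admissible states; the constraint does the modeling work.
ok = np.abs(P - (ET + Q + dS)) < 1.0
print("admissible fraction:", ok.mean())
print("runoff prior mean:", Q.mean(), " -> constrained mean:", Q[ok].mean())
```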
Although such a strategy would not allow us to measure epistemic uncertainty (uncertainty is always and still inestimable), at least it would allow us to know what information we actually have about the behavior of hydrologic systems. This would be a very different way of approaching model building than appending uncertainty distributions to PDE solutions, and would allow us to actually quantify the information content of our scientific hypotheses and models.
So perhaps my predictions about paradigm change will not come to pass. I am, after all, arguing against two of the most fundamental tools of our science: I claim that we should not use Bayesian methods to evaluate models, and that we should not use differential equations to build models. I do suspect that I am right about both of these things, in the sense that our science (indeed, any science of complex systems) would accelerate by abandoning these ideas in favor of information-centric philosophies and methods, but perhaps it will take longer than 30 years to demonstrate that such a substantial change is necessary.

------
Beven, K. (1987) 'Towards a new paradigm in hydrology', Water for the Future: Hydrology in Perspective. Rome, Italy: IAHS Publication.
Beven, K. J. (2016) 'Facets of uncertainty: Epistemic error, non-stationarity, likelihood, hypothesis testing, and communication', Hydrological Sciences Journal, 61(9), pp. 1652-1665.
Cao, C., Carroll, S. M. and Michalakis, S. (2017) 'Space from Hilbert space: Recovering geometry from bulk entanglement', Physical Review D, 95(2), pp. 024031.
Clark, M. P., Nijssen, B., Lundquist, J. D., Kavetski, D., Rupp, D. E., Woods, R. A., Freer, J. E., Gutmann, E. D., Wood, A. W. and Gochis, D. J. (2015) 'A unified approach for process-based hydrologic modeling: 2. Model implementation and case studies', Water Resources Research, 51(4), pp. 2515-2542.
Dooge, J. C. I. (1986) 'Looking for hydrologic laws', Water Resources Research, 22(9S), pp. 46S-58S.
Gelman, A. and Shalizi, C. R. (2013) 'Philosophy and the practice of Bayesian statistics', British Journal of Mathematical and Statistical Psychology, 66(1), pp. 8-38.
McDonnell, J. J., Sivapalan, M., Vaché, K., Dunn, S., Grant, G., Haggerty, R., Hinz, C., Hooper, R., Kirchner, J. and Roderick, M. L. (2007) 'Moving beyond heterogeneity and process complexity: A new vision for watershed hydrology', Water Resources Research, 43(7).
Montanari, A. and Koutsoyiannis, D. (2012) 'A blueprint for process-based modeling of uncertain hydrological systems', Water Resources Research, 48(9).
Nearing, G. S. and Gupta, H. V. (2017) 'Information vs. uncertainty as a foundation for a science of environmental modeling', https://arxiv.org/abs/1704.07512.


[1] It’s the kind of paper that can be enjoyed with a beer.
[2] Dooge (1986) gave a somewhat more technical discussion of this same problem.
[3] e.g., https://www.quantamagazine.org/supersymmetry-bet-settled-with-cognac-20160822
This Technical Committee on Hydrologic Uncertainty seeks to improve how uncertainty is evaluated and measured by scientists in the Hydrology section of AGU, and to improve how uncertainty is communicated within and beyond the hydrology section. The technical committee maintains a blog site at http://aguhu.blogspot.com/ for communication and for the evolution of scientific sessions.

Hydrologists use uncertainty concepts and measures in many ways, from testing theories against data to providing regulators with defensible quantification of uncertainties associated with sometimes controversial environmental problems (e.g., sustainability, integrated water resources management, climate impacts, carbon sequestration, hydrofracking, and waste disposition). Issues of interest include how uncertainties (in data, model structures, parameters, and driving forces) are represented, evaluated, and reduced; uncertainty quantification in risk analysis and decision support; and how legal structures do and do not integrate the reality of uncertainty. Also of interest are probabilistic and non-probabilistic metrics used to evaluate model responses, judge models against data, rank alternative models, and test hypotheses; sensitivity analyses used to unravel sources of uncertainty; data collection strategies optimized for uncertainty reduction; and novel ideas not yet considered.

As uncertainty is a cross-cutting issue, the Hydrology Section Uncertainty Technical Committee coordinates with other sections of AGU to include the notion of uncertainty in their research fields. This interdisciplinary and quantitative focus provides fruitful opportunities for collaborative research with broader funding opportunities. One of the critical missions of the committee is to foster interdisciplinary research for uncertainty analysis, and to use uncertainty analysis as a vital tool for advancing understanding and bridging multiple disciplines.

Wednesday, June 7, 2017

Welcome to the blog site developed by Dr. Vesselinov at Los Alamos National Laboratory. The Hydrology Uncertainty Committee (http://hydrology.agu.org/committees/) of the American Geophysical Union (AGU) will use this blog site for information exchange, group discussion, and other communication within and beyond the hydrology uncertainty committee. Should you have any questions, please contact Ming Ye (mye@fsu.edu; 850-644-4587), who is serving as the Hydrology Committee Chair. Thanks.