
On the (im)possible validation of hydrogeological models
Comptes Rendus. Géoscience, Volume 355 (2023) no. S1, pp. 337-345.

Abstract

This paper revisits the controversy on the validation of hydrogeological models, 30 years after it broke out with the publications by [Konikow and Bredehoeft, 1992a] and [de Marsily et al., 1992]. In that debate, [Konikow and Bredehoeft, 1992a] argued that the word “valid” was misleading to the public and should not be used with respect to models. [de Marsily et al., 1992] answered that while the bases of hydrogeological models (conservation of mass and Darcy’s law) were incontestable and unconditionally valid, specific validation exercises were sorely needed to evaluate the parameters and the geometry of these models (by confronting the models with data they had not seen during the calibration phase). By updating and extending the literature review, we reanalyze this debate and the arguments presented, and conclude by proposing an extension of de Marsily’s position, which underlines the necessity of looking at validation from two distinct viewpoints, i.e., the point of view of the model’s explanatory power (theoretical content) and the point of view of its predictive power. The explanatory and predictive dimensions of model validation are to be considered separately.


Metadata
DOI : 10.5802/crgeos.142
Keywords: Hydrogeological model, Model validation, Corroboration, Falsifiability, Ghislain de Marsily
Vazken Andréassian 1

1 Université Paris-Saclay, INRAE, HYCAR Research Unit, Antony, France
License: CC-BY 4.0
Copyright: the authors retain their rights
Vazken Andréassian. On the (im)possible validation of hydrogeological models. Comptes Rendus. Géoscience, Volume 355 (2023) no. S1, pp. 337-345. doi : 10.5802/crgeos.142. https://comptes-rendus.academie-sciences.fr/geoscience/articles/10.5802/crgeos.142/


1. Introduction

While validation exercises cannot ensure perfection, they help hydrogeologists do their level best by increasing their confidence in the model used: This is how one could summarize the position of de Marsily et al. [1992] in the model validation debate that set them in opposition to Konikow and Bredehoeft [1992a]. The argumentation of de Marsily et al. [1992] was quite straightforward: They insisted that model validation was an essential exercise for hydrogeology, and that it was excessive to call upon the Popperian vision of falsifiability [Popper 1959] to renounce testing hydrogeological models exhaustively: “Groundwater flow models rely essentially on two concepts: (i) mass balance, (ii) Darcy’s law. The former is a principle, not a theory. No one is going to seriously argue that the mass conservation principle may one day be invalidated. […] Darcy’s law is not a theory; it is an empirical observation, which is applied in a huge number of cases (although it can be in error in a few very special cases, and even so, the departure from the linear Darcy law will be of little significance in most applications)” [de Marsily et al. 1992, p. 367].

Beyond the principles that they considered pointless to contest, de Marsily et al. [1992] insisted that both the parameters and the geometry (the structure) of a hydrogeological model remain uncertain, and that this is precisely why the so-called validation exercises were needed: either to refine the model progressively, or to confirm the robustness of past parametric and structural choices. They argued that the validation exercises were meaningful and that they necessitated using the model in a predictive mode and confronting it with data it had not seen during the calibration phase. This process “increases the confidence” in the model in question, and even if certainty and perfection remain out of reach, this is already a worthy result.

In their response, Konikow and Bredehoeft [1992b] wrote that “using the word ‘valid’ with respect to models misleads the public” and makes hydrogeologists “look foolish to our scientific colleagues”. However, they agreed that the exercise they called “postaudit” (which consists in revisiting past predictions after a few years) was useful.

As in all controversies, the vocabulary used is not always well defined. “Valid” comes from the Latin “validus”, meaning “strong, healthy”. The concept of validity has a precise definition in logic, that of a univocal link between the premises and the conclusion of an argument (i.e., if the premises are true, then the conclusion has to be true). It is a well-established concept in law, where a norm is valid if conditions of form (the procedure is respected) and substance (the superior rules of law are respected) are satisfied. It is not precisely defined in hydrogeology, where neither Konikow and Bredehoeft [1992a] nor de Marsily et al. [1992] provided a clear definition (we would not consider Konikow and Bredehoeft’s definition of validation as “a process that can guarantee that a model is a correct representation of the physical world” to be precise, because the term “correct” is as vague as the term “valid” was in the first place).

Thirty years have passed since the publication of the articles by de Marsily et al. [1992] and Konikow and Bredehoeft [1992a], and we posit that it is time to propose a critical appraisal of this debate, in the light of more recent contributions on the model validation issue.

2. Further contributions from hydrogeologists

Several hydrogeologists contributed further to the debate. Carrera et al. [1993] started by stressing that, to them, an accurate characterization of geological media was “absurdly utopic”, adding that due to the numerous unknowns and uncertainties in both physical processes and underground media properties, validation was a “rather elusive concept, probably more controlled by the modeler’s background and views of reality than by actual facts”. They insisted that the qualitative nature of many observations necessarily results in a somewhat subjective conceptualization by the modeler, and hence in several equally likely alternative models. It then becomes essential to agree on an objective model selection process, and the authors proposed one involving (i) an analysis of model residuals, (ii) an analysis of model parameters (with the aim of having “reasonable” values), and (iii) the computation of theoretical measures of model validity. Acknowledging the difficulty of linking model parameter values with field measurements (because of the scale issue), they insisted on aiming at parameter stability and added that parsimony was a good means to obtain robust parameters. In conclusion, they underlined that different people perceive the validation process differently, and suggested that models be seen as simple theories about the behavior of natural systems, to reduce the “drama and controversy often associated with the concept of validation”.

Gorokhovski and Nute [1996] also contributed to the debate: considering the “Popperian” validation of hydrogeological models impossible, they proposed to focus on improving the evaluation of modelling uncertainties using full models and surrogate models, in what they named a “two-level modelling approach”.

The vision of Doherty [2011] is also worth mentioning in the context of this debate: in an editorial of Groundwater, he discussed the relative merits of complex (“picture-perfect”) and simple (“abstract”) models, which should both have a role to play for the sake of extracting as much information as possible from historical data. He added that abstract models are too often discarded “just because the model does not ‘look like’ what we imagine reality to look like”, while “a model deserves criticism only when it fails to achieve the only thing that it has a right to claim—quantification of uncertainty and maximum reduction of uncertainty through optimal processing of environmental data”.

3. The model validation debate on the other side of the hydrological fence (among the “surface” hydrologists)

We all agree that there is only one water cycle, and that the border between hydrogeology and surface hydrology is only cultural, mostly an inheritance of the too-narrow disciplinary teaching of the 20th century. There are, however, different traditions in surface hydrology (where models focus on reproducing the precipitation–streamflow relationship, most of the time without mentioning groundwater levels) and in hydrogeology (where the reproduction of piezometric levels is of primary importance and surface processes are considered only from a “recharge” perspective). Let us now see how the issue of model validation has been dealt with by the “surface” hydrologists.

The loudest voice on this topic has unarguably been that of Vit Klemeš, former president of the International Association of Hydrological Sciences. He entered the debating arena with a paper published a few years before the article by de Marsily et al. [1992]. Klemeš [1986] defended the generalization of what he called split sample tests1 (SSTs) and proposed a progressive four-level calibration–validation testing scheme to assess hydrological models. Klemeš’s SST focuses on model transposability in time and space, with increasing difficulty presented to the model: (i) the elementary SST is based on calibrating and validating the model on two independent periods, (ii) the proxy-basin SST is based on transferring parameters between neighboring catchments, (iii) the differential SST is based on calibrating and validating the model on two independent and contrasting (dry/wet or cold/warm) periods, and (iv) the proxy-basin differential SST is based on transferring parameters between neighboring catchments on contrasting periods. Klemeš’s hope was that a wider adoption of SST practices could reduce “the most glaring abuses of simulation models” and promote realistic assessments among modelers by avoiding “exaggerated claims regarding model capabilities”.2 All this is quite similar to de Marsily’s objective: “increasing confidence”.
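
To make the mechanics of the elementary SST concrete, here is a minimal sketch in Python (our own illustration, not Klemeš’s). The `calibrate` and `simulate` callables are hypothetical stand-ins for any rainfall–runoff model and its calibration routine, and the Nash–Sutcliffe efficiency is just one common choice of score:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; below 0 is worse than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def split_sample_test(forcing, obs, calibrate, simulate):
    """Klemes's elementary SST: calibrate on each half, validate on the other."""
    half = len(obs) // 2
    periods = [(slice(0, half), slice(half, None)),
               (slice(half, None), slice(0, half))]
    scores = []
    for cal, val in periods:
        params = calibrate(forcing[cal], obs[cal])   # calibration period only
        sim = simulate(forcing[val], params)         # data the model has not seen
        scores.append(nse(obs[val], sim))            # judged on validation only
    return scores  # the model should remain acceptable in both directions
```

Levels (ii) to (iv) follow the same pattern, with the calibration and validation samples drawn from neighboring catchments and/or climatically contrasting periods.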

A few years after the paper by de Marsily et al. [1992], Refsgaard and Knudsen [1996] published a paper entitled “Operational validation and intercomparison of different types of hydrological models”. They applied Klemeš’s four-level SST scheme to three models of increasing complexity, as their aim was to study the comparative robustness of different models. In this paper, they provided their own definition of “model validation”, which appears to be a nice synthesis of the opinions of de Marsily et al. [1992] and Konikow and Bredehoeft [1992a]: “Model validation is here defined as the process of demonstrating that a given site-specific model is capable of making accurate predictions for periods outside a calibration period. A model is said to be validated if its accuracy and predictive capability in the validation period have been proven to lie within acceptable limits or errors. It is important to notice that the term model validation refers to a site-specific validation of a model. This must not be confused with a more general validation of a generalized modelling system which, in principle, will never be possible” (p. 2190). The same group of authors developed their vision on the subject in subsequent papers [Refsgaard and Henriksen 2004; Henriksen et al. 2003].

Over the past three decades, Professor Keith Beven has actively discussed the model validation issue. He advocates, however, a rejectionist approach, where “the question is not really validation but rather on what basis should a model run survive invalidation” [Beven, personal communication]. In a recent synthesis [Beven 2019a], he defends the idea that “a simulation model should be shown to be fit-for-purpose, corroborated against some kind of observation or judgment, even if there are few rules about precisely what constitutes ‘fit’ and ‘purpose’, such that its use can be justified.” For model evaluation, he proposes an approach called “limits of acceptability”, considering that there will be “a gradation of acceptability from the ‘best’ models that can be found, to those that are clearly not acceptable as simulators of the system of interest: in this context, the equifinality concept is intrinsically linked to model calibration and validation. The equifinality thesis suggests that there will be no single model representation of an environmental system, but rather an evolving ensemble of models that are considered acceptable in the sense of being useful in prediction as new information becomes available” [Beven 2019b].
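
The “limits of acceptability” idea can be illustrated with a short, hypothetical sketch: parameter sets whose simulations leave observation-based error bounds are invalidated, and the survivors form the behavioural ensemble. The `simulate` callable and the bounds are illustrative assumptions, not Beven’s implementation:

```python
import numpy as np

def behavioural_ensemble(param_sets, forcing, obs_lower, obs_upper, simulate):
    """Keep the parameter sets whose simulations survive invalidation."""
    survivors = []
    for params in param_sets:
        sim = simulate(forcing, params)
        # A run is retained only if it stays within the limits of
        # acceptability at every observation point.
        if np.all((sim >= obs_lower) & (sim <= obs_upper)):
            survivors.append(params)
    return survivors  # an evolving ensemble, re-screened as new data arrive
```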

Let us also mention here our own past contribution to this debate [Andréassian et al. 2009]: While avoiding the terminological debate on the possible or impossible validation of hydrological models, we did argue that it was important to test models as exhaustively and vigorously as possible, with truly demanding tests that we proposed to call crash tests: Just as the car industry learns by deliberately destroying specimens of its own production, we hydrologists should not be ashamed of taking our models to their limits and even a little beyond. We also underlined that the validation of a given model structure would require tests conducted on sets of catchments as large and varied as possible [see also on this topic Andréassian et al. 2006; Gupta et al. 2014]. A few years later, Biondi et al. [2012] proposed two “codes of practice”, one for the validation of the performances of hydrological models, and another for what they call the “scientific validation” of the model. They insist on discussing model limitations “with the same detail that is dedicated to model strengths”, taking the example (Table 1) of the well-known SWOT analysis [on this issue of valuing the evaluation of model failures, see also our discussion in Andréassian et al. 2010].

Table 1.

Schematic representation of a SWOT analysis for models [modified from Biondi et al. 2012]. Columns: factors related to the model’s predictive power; rows: factors related to the model’s explanatory power.

              Strengths                           Weaknesses
Opportunities Highlight model strengths and       Highlight model weaknesses and
              related opportunities               how they can be mitigated
Risks         Highlight how model strengths       Highlight which risks are
              allow avoiding risks                caused by model weaknesses

4. Other relevant contributions from the fields of science history, ecology, and statistics

Science historian Naomi Oreskes made several relevant contributions to the debate, with some explicit references to the dialogue between de Marsily et al. [1992] and Konikow and Bredehoeft [1992a]. In an initial paper, Oreskes et al. [1994] argued that models can only be evaluated in relative terms (i.e., a model should not be declared “good” but only “better” than an alternative one). They underlined that “the term validation does not necessarily denote an establishment of truth. Rather, it denotes the establishment of legitimacy typically given in terms of contracts, arguments and methods. A valid contract is one that has not been nullified by action or inaction. A valid argument is one that does not contain obvious errors of logic. By analogy, a model that does not contain known or detectable flaws and is internally consistent can be said to be valid.” Oreskes et al. [1994] explicitly referred to the position of de Marsily et al., which they commended as honest (but not easily marketable…), considering that it fell under the van Fraassen school of thought, i.e., constructive empiricism, where the goal of a scientific theory cannot be truth (unobtainable) but rather what van Fraassen names empirical adequacy.

In a second paper, Oreskes [1998] returned to the topic of validation in order to address issues related to models used to evaluate/support public policies: There, the semantic debate becomes overwhelming and Oreskes argued that “rather than talking about strategies for validation, we should be talking about means of evaluation”. A very interesting point in Oreskes’ [1998] paper is a remark on the surprising reluctance of most scientists toward evaluation tests: “Most scientists are aware of the limitations of their models, yet this private understanding contrasts with the public use of affirmative language to describe model results.”

In a third paper, Oreskes and Belitz [2001] first expressed semantic regrets—“the term ‘validation’ is an unfortunate one”—then underlined that the main problem lies with the extrapolation capacity of models: “Models may match available observations, yet still be conceptually flawed. Such models may work in the short run, but later fail. […] Rather than think of models as something to accept or reject […] it may be more useful to think of models as tools to be modified in response to knowledge gained through continued observation of the natural systems being represented.”

For the ecological sciences, Caswell [1976] discussed the model validation issue and argued that validation should be looked at differently depending on the purpose of the model: He considered it essential to distinguish between predictive models and theoretical models (i.e., models aimed at providing insight into how the system operates). Caswell deemed that theoretical models should be examined according to the Popperian sequence of “conjectures and refutations”, and proposed reserving the term “validation” for predictive models only (and using the Popperian term of corroboration for theoretical models). He explained that the same model can be judged on both grounds, and may thus end up being simultaneously declared predictively validated and theoretically refuted.

Two decades later, Power [1993] suggested a two-step approach to validation that would first check that candidate models are able to reproduce the statistical properties of the observations, in order to eliminate models with poor statistical properties. Only in a second phase would the models’ predictive properties be evaluated. Rykiel [1996] published an exhaustive review of model testing and validation practices in the field of ecological modeling, and his review shows that ecologists agree neither on the semantics nor on the practices: In this way, they do not differ from the hydrogeologists! From an ecological research perspective, Rykiel [1996] considered that “the validation problem reflects ambiguity about how to certify the operational capability of a model versus how to test its theoretical content. The crux of the matter is deciding (1) if the model is acceptable for its intended use, i.e., whether the model mimics the real world well enough for its stated purpose, and, (2) how much confidence to place in inferences about the real system that are based on model results. The former is validation, the latter is scientific hypothesis testing. […] Models can indeed be validated as acceptable for pragmatic purposes, whereas theoretical validity is always provisional.” In conclusion, the author insisted that “validation is not a procedure for testing scientific theory or for certifying the ‘truth’ of current scientific understanding, nor is it a required activity of every modelling project. Validation means that a model is acceptable for its intended use because it meets specified performance requirements.”
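
Power’s two-step approach lends itself to a compact sketch. The statistics screened (mean and variance), the 10% tolerance, and the RMSE ranking below are our own illustrative choices, not prescriptions from his paper:

```python
import numpy as np

def passes_statistical_screen(obs, sim, tol=0.1):
    """Step 1: require simulated mean and variance to lie within a
    tolerance of the observed ones (other statistics could be added)."""
    mean_ok = abs(sim.mean() - obs.mean()) <= tol * abs(obs.mean())
    var_ok = abs(sim.var() - obs.var()) <= tol * obs.var()
    return mean_ok and var_ok

def two_step_validation(candidates, obs):
    """Step 2: rank only the statistically plausible candidates by RMSE.
    `candidates` maps a model name to its simulated series."""
    survivors = {name: sim for name, sim in candidates.items()
                 if passes_statistical_screen(obs, sim)}
    rmse = lambda sim: float(np.sqrt(np.mean((obs - sim) ** 2)))
    return sorted(survivors, key=lambda name: rmse(survivors[name]))
```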

More recently, the statistician Shmueli [2010] published a synthesis paper entitled “To explain or to predict?” where he discussed in much detail the distinction between explanatory and predictive models. This distinction seems to be central in the model validation debate; indeed, an explanatory model is to be validated qualitatively (and not necessarily quantitatively), while a predictive model is to be validated quantitatively (and could possibly be a “black-box” model, without any explicit explanatory capacity): “Predictive models are advantageous in terms of negative empiricism: a model either predicts accurately or it does not, and this can be observed. In contrast, explanatory models can never be confirmed and are harder to contradict.” Shmueli [2010] argued that misunderstandings arise from the frequent conflation between explanatory power and predictive power in science: “While explanatory power provides information about the strength of an underlying causal relationship, it does not imply its predictive power.” To conclude, the author suggested considering explanatory and predictive abilities as two dimensions: “explanatory power and predictive accuracy are different qualities and a model will possess some level of each.”

5. Discussion

5.1. Validation from a model uncertainty perspective

Over the past 30 years, uncertainty assessments have progressively become an inseparable part of modeling practice. The estimation of predictive uncertainty is seen as a kind of “quality insurance” [Refsgaard et al. 2005] and is as such considered good practice for any environmental modeling activity [Refsgaard et al. 2007]. In groundwater modeling, the uncertainty topic has obviously been discussed for years [de Marsily 1978; Delhomme 1979] but no general agreement has yet been reached on how to quantify it adequately; see, for example, Barnett et al. [2012] and Guillaume et al. [2016] for a review. Notwithstanding the present popularity of uncertainty assessment exercises, which are now becoming part of common model evaluation practice, it is important to stress here that they can only be seen as a necessary but not sufficient means for model validation, because they only refer to the predictive dimension of models (cf. the aforementioned discussion of the 2010 Shmueli paper). And one can find in the history of science models that were “right but for the wrong reason” [e.g., the Ptolemaic planetary model and its famous epicycles, Klemeš 1986].
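
One elementary predictive-uncertainty check, sketched below under the assumption that interval bounds come from whatever uncertainty method the modeler uses (ensembles, post-processing, etc.), is the empirical coverage of a stated prediction interval over a validation period:

```python
import numpy as np

def interval_coverage(obs, lower, upper):
    """Empirical coverage: fraction of independent observations
    that fall inside the stated prediction interval."""
    inside = (obs >= lower) & (obs <= upper)
    return float(inside.mean())

# A nominal 90% interval should cover roughly 90% of validation-period
# observations; good coverage, however, says nothing about whether the
# model is right for the right reasons (its explanatory power).
```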

5.2. Validation from a sensitivity analysis perspective

Sensitivity analysis (SA) is as old as model construction, but the last three decades have seen a renewed interest in the use of SA techniques. Keeping a model slim is not enough to make it a good model, but it can definitely help make the model validation process more efficient. According to Saltelli et al. [2000], SA can help investigate “whether a model resembles the system or processes under study; the factors that most contribute to the output variability and that require additional research to strengthen the knowledge base; the model parameters (or parts of the model itself) that are insignificant, and that can be eliminated from the final model; if there is some region in the space of input factors for which the model variation is maximum; the optimal regions within the space of the factors for use in a subsequent calibration study; if and which (group of) factors interact with each other.”
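
As an illustration, the most elementary SA technique, a one-at-a-time (OAT) perturbation around a reference parameter set, can be sketched in a few lines. Variance-based methods (e.g., Sobol indices) address Saltelli’s full list of questions more completely; the `model` callable here is hypothetical:

```python
def oat_sensitivity(model, ref_params, rel_step=0.1):
    """Relative change in the model output for a +10% perturbation of
    each parameter, taken one at a time around a reference set."""
    ref_output = model(ref_params)
    effects = {}
    for name, value in ref_params.items():
        perturbed = dict(ref_params)
        perturbed[name] = value * (1.0 + rel_step)
        effects[name] = (model(perturbed) - ref_output) / ref_output
    return effects  # near-zero effects flag parameters that may be dropped
```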

5.3. Validation from a data availability perspective

Over the past 30 years, the type and amount of data available for model validation have evolved, and this has had an impact on the “feasibility” of validation exercises. On the positive side, distributed data from satellites are now available, sometimes at high frequency. New measurements have appeared, allowing models to be evaluated at a regional rather than at a point scale: one can mention here NASA’s Gravity Recovery and Climate Experiment (GRACE), which has provided since 2002 a quantitative measurement of terrestrial water storage changes, allowing the estimation of groundwater storage changes [Tapley et al. 2004]. Other satellite products offer information on actual evaporation and snow extent, and while the quality of satellite precipitation estimates remains rather modest, it has improved too. Water quality and water temperature sensors are also increasingly available, so that in many regions of the world the possibilities for quantitative validation of hydrogeological predictions have increased. Of course, there is another side to every coin… and one should also mention that in many areas of the world, the density of ground stations (measuring streamflow, piezometric level or precipitation) has actually decreased…

5.4. Validation or evaluation?

Among the criticisms made of the 1992 model validation debate, one is full of good sense: since there is so much controversy around the word “validation”, let us choose another, softer one and give it a precise definition. This is the point of view developed by Oreskes [1998]: “rather than talking about strategies for validation, we should be talking about means of evaluation. That is not to say that language alone will solve our problems or that the problems of model evaluation are primarily linguistic. The uncertainties inherent in large, complex models will not go away simply because we change the way we talk about them. But this is precisely the point: calling a model validated does not make it valid.” This is certainly right, but on the other hand we must acknowledge that it is extremely complicated to fight language habits! For example, the French language is full of undesirable anglicisms, which the Académie Française is fighting against… with limited success. We have been able to introduce “ordinateur” to replace “computer”, but we keep using the English word “sport” instead of its old French equivalent “desport”. If we decide to wait for our colleagues to accept and adopt our naming conventions… we may need a lot of patience.

6. Conclusion

Thirty years after the publication of the article by de Marsily et al. [1992], our literature review has allowed us to shed some new light on the model validation debate. For de Marsily et al. [1992], model validation exercises were meant to increase the confidence that a hydrogeologist would have in his/her model. This notion of confidence was multifactorial, as a model was to hold both explanatory power (cf. the reference to Darcy’s law and the principle of mass conservation) and predictive power (cf. the reference to success obtained in tests on an independent period).

This distinction between the predictive and explanatory dimensions of validation [underlined among others by Caswell 1976; Beven 2001 and Shmueli 2010] is essential: With regard to our model validation debate, it implies that model validation can have two dimensions (hence the possibility of misunderstandings for those who did not realize it in the first place). It also implies the possibility of searching for compromises between these two dimensions: A “strong” predictive model could be preferred to a “weak” explanatory model, and vice versa. Obviously, validation becomes a multi-objective endeavor, and as such, it will require hydrogeologists to look for compromises (which may remain a matter of debate among them).

To conclude, we would like to propose our own definition of model validation by extending that of de Marsily; and while this new definition is unavoidably shifted towards the way surface hydrologists look at models, we do believe that it retains enough generality to be of common interest to the hydrological and hydrogeological sciences:

  1. The validation of models is possible and necessary;
  2. When judging the validity of a model, one needs to keep in mind that a model remains an abstraction and a simplification;
  3. Judging the validity of a hydrological model requires one to consider the model’s objectives as well as its space and time scale;
  4. Validity can be considered from the point of view of the model’s explanatory power (theoretical content) and/or from the point of view of its predictive power. The explanatory and predictive dimensions of model validation must be considered separately: A model may thus be simultaneously declared predictively validated and theoretically refuted;
  5. When validity cannot be assessed in an absolute way, the value of a model can be examined from a comparative perspective;
  6. When judging a model’s predictive power, the quantitative predictions are at least to be judged based on measurements that have not been used for model calibration, and possibly on measurements requiring a higher extrapolation capacity;
  7. An assessment of the model’s predictive uncertainty can be helpful with the validation process.

Conflicts of interest

The author has no conflict of interest to declare.

Acknowledgements

The author acknowledges the reviews of two anonymous referees, which helped him significantly improve his manuscript.

1 Note that Klemeš never claimed to have invented the concept [see, e.g., Larson 1931; Mosteller and Tukey 1988]: he wrote that the SST “contains no new and original ideas; it is merely an attempt to present an organized methodology based on standard techniques, a methodology that can be viewed as a generalization of the routine split sample test”. But hydrologists still very often refer to his article, which is by far the most cited of his papers (over 750 citations as of December 2021), and the SST has seen a resurgence of interest during the last decade [see, e.g., Coron et al. 2012; Seifert et al. 2012; Teutschbein and Seibert 2013; Thirel et al. 2015; Dakhlaoui et al. 2019; Nicolle et al. 2021].

2 Many years after publishing his famous paper, Klemeš (personal communication) wrote to us that he had in fact always been skeptical about the capacity of hydrologists to rigorously validate their models. He wrote that he knew in advance that the tests he had suggested would be “avoided under whatever excuses available because modelers, especially those who want to ‘market’ their products, know only too well that they would not pass it.” He concluded: “I had no illusions in this regard when I wrote my paper, but the logic of modelling led me to develop the ‘testing principle’ to its, let’s say, ‘theoretical limit’.”


Bibliographie

[Andréassian et al., 2006] V. Andréassian; A. Hall; N. Chahinian; J. Schaake Introduction and synthesis: Why should hydrologists work on a large number of basin data sets?, Large Sample Basin Experiments for Hydrological Model Parameterization: Results of the Model Parameter Experiment—MOPEX, Volume 307, IAHS Publ., 2006, pp. 1-5

[Andréassian et al., 2009] V. Andréassian; C. Perrin; L. Berthet; N. Le Moine; J. Lerat; C. Loumagne; L. Oudin; T. Mathevet; M. H. Ramos; A. Valéry Crash tests for a standardized evaluation of hydrological models, Hydrol. Earth Syst. Sci., Volume 13 (2009), pp. 1757-1764 | DOI

[Andréassian et al., 2010] V. Andréassian; C. Perrin; E. Parent; A. Bardossy Editorial—the court of miracles of hydrology: can failure stories contribute to hydrological science?, Hydrol. Sci. J., Volume 55 (2010) no. 6, pp. 849-856 | DOI

[Barnett et al., 2012] B. Barnett; R. Townley; V. Post; R. Evans; R. J. Hunt; L. Peeters; S. Richardson; A. D. Werner; A. Knapton; A. Boronkay Australian groundwater modelling guidelines, 2012 (Report no 82. National Water Commission, Canberra)

[Beven, 2001] K. Beven On explanatory depth and predictive power, Hydrol. Process., Volume 15 (2001), pp. 3069-3072 | DOI

[Beven, 2019a] K. Beven Invalidation of models and fitness-for-purpose: A rejectionist approach, Computer Simulation Validation (C. Beisbart; N. J. Saam, eds.), Springer, Cham, 2019, pp. 145-171 | DOI

[Beven, 2019b] K. Beven Validation and equifinality, Computer Simulation Validation (C. Beisbart; N. J. Saam, eds.), Springer, Cham, 2019, pp. 791-809 | DOI

[Biondi et al., 2012] D. Biondi; G. Freni; V. Iacobellis; G. Mascaro; A. Montanari Validation of hydrological models: Conceptual basis, methodological approaches and a proposal for a code of practice, Phys. Chem. Earth, Volume 42–44 (2012), pp. 70-76 | DOI

[Carrera et al., 1993] J. Carrera; S. F. Mousavi; E. J. Usunoff; X. Sánchez-Vila; G. Galarza A discussion on validation of hydrogeological models, Reliab. Eng. Syst. Saf., Volume 42 (1993), pp. 201-216 | DOI

[Caswell, 1976] H. Caswell The validation problem, Systems Analysis and Simulation in Ecology (B. Patten, ed.), Volume IV, Academic Press, New York, NY, 1976, pp. 313-325 | DOI

[Coron et al., 2012] L. Coron; V. Andréassian; C. Perrin; J. Lerat; J. Vaze; M. Bourqui; F. Hendrickx Crash testing hydrological models in contrasted climate conditions: An experiment on 216 Australian catchments, Water Resour. Res., Volume 48 (2012), W05552 | DOI

[Dakhlaoui et al., 2019] H. Dakhlaoui; D. Ruelland; Y. Tramblay A bootstrap-based differential split-sample test to assess the transferability of conceptual rainfall-runoff models under past and future climate variability, J. Hydrol., Volume 575 (2019), pp. 470-486 | DOI

[Delhomme, 1979] J. P. Delhomme Spatial variability and uncertainty in groundwater flow parameters: a geostatistical approach, Water Resour. Res., Volume 15 (1979), pp. 269-280 | DOI

[de Marsily et al., 1992] G. de Marsily; P. Combes; P. Goblet Comment on ‘Ground-water models cannot be validated’, by L.F. Konikow and J.D. Bredehoeft, Adv. Water Resour., Volume 15 (1992), pp. 367-369 | DOI

[de Marsily, 1978] G. de Marsily De l’identification des systèmes hydro-géologiques, Doctorat d’Etat thesis, Université Pierre et Marie Curie, Paris (1978)

[Doherty, 2011] J. Doherty Modeling: Picture perfect or abstract art?, Groundwater, Volume 49 (2011) no. 4, p. 455 | DOI

[Gorokhovski and Nute, 1996] V. Gorokhovski; D. Nute Validation of hydrogeological models is impossible: what’s next?, Calibration and Reliability in Groundwater Modelling, Volume 237, IAHS Red Book, 1996, pp. 417-424

[Guillaume et al., 2016] J. H. A. Guillaume; R. J. Hunt; A. Comunian; R. S. Blakers; B. Fu Methods for exploring uncertainty in groundwater management predictions, Integrated Groundwater Management: Concepts, Approaches and Challenges (A. J. Jakeman; O. Barreteau; R. J. Hunt; J.-D. Rinaudo; A. Ross, eds.), Springer International Publishing, Cham, 2016, pp. 711-737 | DOI

[Gupta et al., 2014] H. V. Gupta; C. Perrin; R. Kumar; G. Blöschl; M. Clark; A. Montanari; V. Andréassian Large-sample hydrology: a need to balance depth with breadth, Hydrol. Earth Syst. Sci., Volume 18 (2014), pp. 463-477 | DOI

[Henriksen et al., 2003] H. J. Henriksen; L. Troldborg; P. Nyegaard; T. Sonnenborg; J. C. Refsgaard; B. Madsen Methodology for construction, calibration and validation of a national hydrological model for Denmark, J. Hydrol., Volume 280 (2003), pp. 52-71 | DOI

[Klemeš, 1986] V. Klemeš Operational testing of hydrological simulation models, Hydrol. Sci. J., Volume 31 (1986), pp. 13-24 | DOI

[Konikow and Bredehoeft, 1992a] L. F. Konikow; J. D. Bredehoeft Ground-water models cannot be validated, Adv. Water Resour., Volume 15 (1992), pp. 75-83 | DOI

[Konikow and Bredehoeft, 1992b] L. F. Konikow; J. D. Bredehoeft Reply to comment, Adv. Water Resour., Volume 15 (1992), pp. 371-372

[Larson, 1931] S. C. Larson The shrinkage of the coefficient of multiple correlation, J. Educ. Psychol., Volume 22 (1931), pp. 45-55 | DOI | Zbl

[Mosteller and Tukey, 1988] F. Mosteller; J. W. Tukey Data analysis, including statistics, The Collected Works of John W. Tukey: Graphics 1965-1985, Volume 5, CRC Press, Boca Raton, 1988

[Nicolle et al., 2021] P. Nicolle; V. Andréassian; P. Royer-Gaspard; C. Perrin; G. Thirel; L. Coron; L. Santos Technical note: RAT – a robustness assessment test for calibrated and uncalibrated hydrological models, Hydrol. Earth Syst. Sci., Volume 25 (2021), pp. 5013-5027 | DOI

[Oreskes and Belitz, 2001] N. Oreskes; K. Belitz Philosophical issues in model assessment, Model Validation: Perspectives in Hydrological Science (M. G. Anderson; P. D. Bates, eds.), John Wiley and Sons, Ltd, London, 2001, pp. 23-41

[Oreskes et al., 1994] N. Oreskes; K. Shrader-Frechette; K. Belitz Verification, validation, and confirmation of numerical models in the earth sciences, Science, Volume 263 (1994), pp. 641-646 | DOI

[Oreskes, 1998] N. Oreskes Evaluation (not validation) of quantitative models, Environ. Health Perspect., Volume 106 (1998), pp. 1453-1460 | DOI

[Popper, 1959] K. Popper The Logic of Scientific Discovery, Routledge, London, 1959, 513 pages

[Power, 1993] M. Power The predictive validation of ecological and environmental models, Ecol. Model., Volume 68 (1993), pp. 33-50 | DOI

[Refsgaard and Henriksen, 2004] J. C. Refsgaard; H. J. Henriksen Modelling guidelines—terminology and guiding principles, Adv. Water Resour., Volume 27 (2004), pp. 71-82 | DOI

[Refsgaard and Knudsen, 1996] J. C. Refsgaard; J. Knudsen Operational validation and intercomparison of different types of hydrological models, Water Resour. Res., Volume 32 (1996), pp. 2189-2202 | DOI

[Refsgaard et al., 2005] J. C. Refsgaard; H. J. Henriksen; W. G. Harrar; H. Scholten; A. Kassahun Quality assurance in model based water management—review of existing practice and outline of new approaches, Environ. Model. Softw., Volume 20 (2005), pp. 1201-1215 | DOI

[Refsgaard et al., 2007] J. C. Refsgaard; J. P. van der Sluijs; A. L. Hojberg; P. A. Vanrolleghem Uncertainty in the environmental modelling process—A framework and guidance, Environ. Model. Softw., Volume 22 (2007), pp. 1543-1556 | DOI

[Rykiel, 1996] E. J. Rykiel Testing ecological models: the meaning of validation, Ecol. Model., Volume 90 (1996), pp. 229-244 | DOI

[Saltelli et al., 2000] A. Saltelli; K. Chan; E. M. Scott Sensitivity Analysis, John Wiley, Hoboken, NJ, 2000, 504 pages

[Seifert et al., 2012] D. Seifert; T. O. Sonnenborg; J. C. Refsgaard; A. L. Højberg; L. Troldborg Assessment of hydrological model predictive ability given multiple conceptual geological models, Water Resour. Res., Volume 48 (2012), W06503 | DOI

[Shmueli, 2010] G. Shmueli To explain or to predict?, Stat. Sci., Volume 25 (2010), pp. 289-310 | DOI | MR | Zbl

[Tapley et al., 2004] B. D. Tapley; S. Bettadpur; M. M. Watkins; C. Reigber The gravity recovery and climate experiment; mission overview and early results, Geophys. Res. Lett., Volume 31 (2004) no. 9, L09607 | DOI

[Teutschbein and Seibert, 2013] C. Teutschbein; J. Seibert Is bias correction of regional climate model (RCM) simulations possible for nonstationary conditions?, Hydrol. Earth Syst. Sci., Volume 17 (2013), pp. 5061-5077 | DOI

[Thirel et al., 2015] G. Thirel; V. Andréassian; C. Perrin; J.-N. Audouy; L. Berthet; P. Edwards; N. Folton; C. Furusho; A. Kuentz; J. Lerat; G. Lindström; E. Martin; T. Mathevet; R. Merz; J. Parajka; D. Ruelland; J. Vaze Hydrology under change: an evaluation protocol to investigate how hydrological models deal with changing catchments, Hydrol. Sci. J., Volume 60 (2015), pp. 1184-1199 | DOI



