Inferential, nonparametric statistics to assess the quality of probabilistic forecast systems
Article Title | Inferential, nonparametric statistics to assess the quality of probabilistic forecast systems |
---|---|
ERA Journal ID | 1985 |
Article Category | Article |
Authors | Maia, Aline de H. N. (Author), Meinke, Holger (Author), Lennox, Sarah (Author) and Stone, Roger (Author) |
Journal Title | Monthly Weather Review |
Journal Citation | 135 (2), pp. 351-362 |
Number of Pages | 12 |
Year | 2007 |
Place of Publication | Boston, MA, United States |
ISSN | 0027-0644; 1520-0493 |
Digital Object Identifier (DOI) | https://doi.org/10.1175/MWR3291.1 |
Web Address (URL) | http://journals.ametsoc.org/doi/pdf/10.1175/MWR3291.1 |
Abstract | Many statistical forecast systems are available to interested users. To be useful for decision making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanism and its statistical manifestation have been firmly established, the forecasts must also provide some quantitative evidence of 'quality'. However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Often, providers and users of such forecast systems are unclear about what quality entails and how to measure it, leading to confusion and misinformation. A generic framework is presented that quantifies aspects of forecast quality using an inferential approach to calculate nominal significance levels (p values), which can be obtained either by directly applying nonparametric statistical tests such as Kruskal–Wallis (KW) or Kolmogorov–Smirnov (KS) or by using Monte Carlo methods (in the case of forecast skill scores). Once converted to p values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. The analysis demonstrates the importance of providing p values rather than adopting some arbitrarily chosen significance levels such as 0.05 or 0.01, which is still common practice. This is illustrated by applying nonparametric tests (such as KW and KS) and skill scoring methods [linear error in the probability space (LEPS) and ranked probability skill score (RPSS)] to the five-phase Southern Oscillation index classification system using historical rainfall data from Australia, South Africa, and India. The selection of quality measures is solely based on their common use and does not constitute endorsement. It is found that nonparametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS. The framework can be implemented anywhere, regardless of dataset, forecast system, or quality measure. Eventually such inferential evidence should be complemented by descriptive statistical methods in order to fully assist in operational risk management. |
Keywords | risk management; Monte Carlo method; numerical analysis; Southern Oscillation; nonparametric statistics; meteorology research |
ANZSRC Field of Research 2020 | 490501. Applied statistics; 370202. Climatology; 350715. Quality management |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Brazilian Agricultural Research Corporation, Brazil; Department of Primary Industries, Queensland; Department of Primary Industries and Fisheries, Queensland; Australian Centre for Sustainable Catchments |
https://research.usq.edu.au/item/9zq11/inferential-nonparametric-statistics-to-assess-the-quality-of-probabilistic-forecast-systems
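The abstract's core idea can be sketched in code: compute nominal p values for a forecast system by applying nonparametric tests (KW, KS) to outcomes stratified by forecast category, and use a Monte Carlo permutation approach for arbitrary statistics such as skill scores. This is a minimal illustration, not the paper's code: the gamma-distributed rainfall, the three phase labels, and the `spread` statistic standing in for a skill score like LEPS or RPSS are all hypothetical.

```python
# Illustrative sketch of the inferential approach described in the abstract:
# nominal p values from nonparametric tests and from a Monte Carlo method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical seasonal rainfall totals (mm) grouped by SOI phase; the
# differing scale parameters mimic a phase-dependent rainfall response.
phase_rain = {
    "negative": rng.gamma(shape=2.0, scale=40.0, size=50),
    "neutral":  rng.gamma(shape=2.0, scale=50.0, size=50),
    "positive": rng.gamma(shape=2.0, scale=60.0, size=50),
}

# Kruskal-Wallis (KW): do rainfall distributions differ across phases?
kw_stat, kw_p = stats.kruskal(*phase_rain.values())

# Kolmogorov-Smirnov (KS): does one phase's distribution differ from the
# climatology formed by pooling the remaining phases?
pooled_rest = np.concatenate([phase_rain["neutral"], phase_rain["positive"]])
ks_stat, ks_p = stats.ks_2samp(phase_rain["negative"], pooled_rest)

# Monte Carlo (permutation) p value for an arbitrary statistic - the route
# the abstract describes for skill scores: shuffle phase labels to build
# the statistic's null distribution.
def perm_p_value(groups, statistic, n_perm=2000, rng=rng):
    pooled = np.concatenate(groups)
    sizes = [len(g) for g in groups]
    observed = statistic(groups)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)          # in-place relabelling of outcomes
        resampled, start = [], 0
        for s in sizes:
            resampled.append(pooled[start:start + s])
            start += s
        if statistic(resampled) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Spread of group means as a crude, hypothetical stand-in for a skill score.
spread = lambda gs: np.ptp([g.mean() for g in gs])
mc_p = perm_p_value(list(phase_rain.values()), spread)

print(f"KW p value: {kw_p:.4f}")
print(f"KS p value: {ks_p:.4f}")
print(f"Monte Carlo p value: {mc_p:.4f}")
```

Reporting these p values directly, rather than a binary verdict at 0.05, is what allows quality to be compared across locations, seasons, and forecast systems, as the article argues.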