
Probabilistic evaluation of competing climate models

Preprint posted on 2024-11-16, authored by Amy Braverman, Snigdhansu Chatterjee, Megan Heyman and Noel Cressie
Climate models produce output over decades or longer at high spatial and temporal resolution. Starting values, boundary conditions, greenhouse gas emissions and so forth make the climate model an uncertain representation of the current climate system and, by implication, of the future climate system. Modern observational datasets offer opportunities to evaluate competing climate models; in this article, we propose evaluating competing climate models through probabilities. The probabilities are derived from summary statistics of climate model output and observational data, through a statistical resampling technique known as the Wild Scale-Enhanced Bootstrap. Here we compare monthly sequences of CMIP5 model output of average global near-surface temperature to similar sequences obtained from the well-known HadCRUT4 data set. The summary statistics we choose come from working in a decorrelated, dimension-reduced wavelet space and regressing the wavelet coefficients of model output on the wavelet coefficients of observations. The dimension-reduced slope and intercept statistics are bootstrapped to allow a probability to be assigned to each model that reflects its output's compatibility with observations.
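A minimal sketch of the kind of pipeline the abstract describes is given below: wavelet-transform a model series and an observational series, regress the model's wavelet coefficients on the observations', and bootstrap the fitted slope and intercept. This is not the authors' code: the wild bootstrap shown uses ordinary Rademacher weights rather than the paper's Wild Scale-Enhanced Bootstrap, the synthetic series merely stand in for CMIP5 output and HadCRUT4 observations, and the wavelet family, decomposition level and replicate count are illustrative assumptions.

```python
# Sketch only: wavelet regression of a model temperature series on an
# observational series, with a plain wild bootstrap of the slope/intercept.
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)

def wavelet_coeffs(series, wavelet="db4", level=4):
    """Flatten a discrete wavelet decomposition into one coefficient vector."""
    return np.concatenate(pywt.wavedec(series, wavelet, level=level))

def fit_slope_intercept(x, y):
    """Ordinary least squares of y on x; returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def wild_bootstrap(x, y, n_boot=1000):
    """Wild bootstrap (Rademacher weights) of the OLS intercept and slope."""
    beta_hat = fit_slope_intercept(x, y)
    fitted = beta_hat[0] + beta_hat[1] * x
    resid = y - fitted
    draws = np.empty((n_boot, 2))
    for b in range(n_boot):
        signs = rng.choice([-1.0, 1.0], size=resid.shape)
        draws[b] = fit_slope_intercept(x, fitted + resid * signs)
    return beta_hat, draws

# Synthetic monthly "global-mean temperature" series standing in for a
# CMIP5 model run and the HadCRUT4 observations (illustrative only).
months = np.arange(512)
obs = 0.01 * months / 12 + 0.2 * np.sin(2 * np.pi * months / 12) \
      + rng.normal(0, 0.10, months.size)
model = 0.012 * months / 12 + 0.2 * np.sin(2 * np.pi * months / 12) \
        + rng.normal(0, 0.12, months.size)

# Regress model wavelet coefficients on observation wavelet coefficients.
w_obs, w_model = wavelet_coeffs(obs), wavelet_coeffs(model)
beta_hat, draws = wild_bootstrap(w_obs, w_model)

# A model compatible with the observations would have slope near 1 and
# intercept near 0; the bootstrap draws give a crude measure of how far
# the refitted (intercept, slope) pairs sit from (0, 1).
dist = np.sqrt(draws[:, 0] ** 2 + (draws[:, 1] - 1.0) ** 2)
print("fitted (intercept, slope):", beta_hat)
print("median bootstrap distance from (0, 1):", np.median(dist))
```

In the paper, analogous bootstrapped slope and intercept statistics are used to attach a probability to each competing model; the distance summary above is only a stand-in for that final probability calculation.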

History

Article/chapter number

03-16

Total pages

56

Language

English
