You can run this notebook in a live session or view it on GitHub.

Demo of Perfect Model Predictability Functions#

This demo shows climpred’s capabilities for a perfect-model framework ensemble simulation with PerfectModelEnsemble.

What’s a perfect-model framework simulation?

A perfect-model framework uses a set of ensemble simulations that are based on a General Circulation Model (GCM) or Earth System Model (ESM) alone. No reanalysis, reconstruction, or observational data product is used to initialize the decadal prediction ensemble. An arbitrary number of members is initialized from perturbed initial conditions, and the control simulation can be viewed as just another member, which climpred treats as member 0.

How to compute predictability skill scores: As no observational data interferes with the random climate evolution of the model, we cannot use an observation-based reference for computing skill scores. Therefore, we can compare the members with one another ("m2m") [Séférian et al., 2018], against the ensemble mean ("m2e") [Griffies and Bryan, 1997], or against the control ("m2c") [Séférian et al., 2018]. We can also compare the ensemble mean to the control member ("e2c") [Griffies and Bryan, 1997]. See comparisons for more information.
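
To make these comparison styles concrete, here is a minimal xarray sketch of the "m2e" idea, as an illustration only rather than climpred’s internal implementation: every member is verified against the ensemble mean.

# Minimal sketch of the "m2e" comparison (illustration only, not climpred's
# internals; the library may additionally leave the verified member out of
# the mean): each member is verified against the ensemble mean.
import numpy as np
import xarray as xr

toy = xr.DataArray(
    np.random.rand(10, 5), dims=["member", "lead"], name="tos"
)  # 10 members, 5 leads
ensemble_mean = toy.mean("member")  # the ensemble mean plays the role of "truth"
mae_m2e = np.abs(toy - ensemble_mean).mean("member")  # e.g. MAE per lead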

When to use perfect-model frameworks:

  • You don’t have a sufficiently long observational record to use as a reference.

  • You want to avoid biases between model climatology and reanalysis climatology.

  • You want to avoid sensitive reactions of biogeochemical cycles to disruptive changes in ocean physics due to assimilation.

  • You want to delve into process understanding of predictability in a model without outside artifacts.

# linting
%load_ext nb_black
%load_ext lab_black
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

import climpred

Load sample data#

Here we use a subset of ensembles and members from the MPI-ESM-LR (CMIP6 version) esmControl simulation of an early state produced for [Spring and Ilyina, 2020].

1-dimensional output#

Our 1D sample output contains datasets of time series for spatially averaged areas ('global', 'North_Atlantic') and temporally averaged periods ('ym', 'DJF', …) over lead years (1, …, 20).

initialized: The ensemble dataset of all members (1, …, 10), inits (initialization years: 3014, 3023, …, 3257), areas, periods, and lead years.

control: The control dataset with the same areas and periods, as well as the years 3000 to 3299.

# load 1D sample data; the North Atlantic annual mean is selected below
initialized = climpred.tutorial.load_dataset("MPI-PM-DP-1D")
control = climpred.tutorial.load_dataset("MPI-control-1D")
initialized["lead"].attrs = {"units": "years"}

We’ll sub-select annual means ("ym") of sea surface temperature ("tos") in the North Atlantic.

# Add to climpred PerfectModelEnsemble object.
pm = (
    climpred.PerfectModelEnsemble(initialized)
    .add_control(control)
    .sel(area="North_Atlantic", period="ym", drop=True)
)
pm
/Users/aaron.spring/Coding/climpred/climpred/utils.py:191: UserWarning: Assuming annual resolution starting Jan 1st due to numeric inits. Please change ``init`` to a datetime if it is another resolution. We recommend using xr.CFTimeIndex as ``init``, see https://climpred.readthedocs.io/en/stable/setting-up-data.html.
  warnings.warn(

climpred.PerfectModelEnsemble

<Initialized Ensemble>
Dimensions:     (lead: 20, init: 12, member: 10)
Coordinates:
  * lead        (lead) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
  * init        (init) object 3014-01-01 00:00:00 ... 3257-01-01 00:00:00
  * member      (member) int64 0 1 2 3 4 5 6 7 8 9
    valid_time  (lead, init) object 3015-01-01 00:00:00 ... 3277-01-01 00:00:00
Data variables:
    tos         (lead, init, member) float32 10.78 10.92 11.11 ... 10.83 10.89
    sos         (lead, init, member) float32 33.38 33.37 33.28 ... 33.4 33.48
    AMO         (lead, init, member) float32 0.07232 0.1894 ... -0.01757 0.06075
<Control Simulation>
Dimensions:  (time: 300)
Coordinates:
  * time     (time) object 3000-01-01 00:00:00 ... 3299-01-01 00:00:00
Data variables:
    tos      (time) float32 10.91 10.96 10.93 11.12 ... 10.54 10.52 10.59 10.84
    sos      (time) float32 33.45 33.37 33.42 33.44 ... 33.4 33.46 33.39 33.38
    AMO      (time) float32 0.1678 0.1614 -0.1101 ... 0.01479 -0.02503 0.07905
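
The warning above appears because init is numeric. Below is a hedged sketch of assigning an explicit annual cftime index before constructing the PerfectModelEnsemble; the January-1st timestamps match the warning’s stated assumption, while the proleptic-gregorian calendar is our assumption here.

import cftime

# Hedged sketch: replace the numeric ``init`` years with cftime dates to
# silence the UserWarning above (annual Jan-1st inits as stated in the
# warning; the calendar choice is an assumption).
initialized_cf = initialized.assign_coords(
    init=[
        cftime.DatetimeProlepticGregorian(int(year), 1, 1)
        for year in initialized["init"].values
    ]
)
# the same conversion applies to ``control``'s numeric ``time`` coordinate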

PredictionEnsemble.plot() displays time series for 1-dimensional data.

pm.plot()
<AxesSubplot:title={'center':' '}, xlabel='validity time', ylabel='tos'>
[Figure: tos time series plotted against validity time for the initialized ensemble and control]

Forecast Verification#

Passing reference=... to PerfectModelEnsemble.verify() optionally verifies against reference forecasts such as "persistence" or "uninitialized".

If no uninitialized ensemble is present, you can generate one from the control with PerfectModelEnsemble.generate_uninitialized().
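
Conceptually, generate_uninitialized() builds such an ensemble by resampling the control run. A rough xarray sketch of that idea, as an illustration only and not the library’s exact implementation:

# Rough sketch of the resampling idea behind generate_uninitialized()
# (illustration only): draw a random 20-year chunk from the control run
# to mimic a single uninitialized pseudo-member.
rng = np.random.default_rng(seed=42)
start = int(rng.integers(0, control["time"].size - 20))
pseudo_member = control["tos"].isel(time=slice(start, start + 20))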

pm = pm.generate_uninitialized()
from climpred.horizon import horizon

skill = pm.verify(
    metric="acc", comparison="m2e", dim=["init", "member"], reference=["uninitialized"]
)
ph = horizon(skill.sel(skill="initialized") > skill.sel(skill="uninitialized"))
fig, ax = plt.subplots(ncols=3, figsize=(12, 3))
for i, v in enumerate(skill.data_vars):
    fg = skill[v].plot(hue="skill", ax=ax[i])
    # adds gray line at last lead where initialized > uninitialized
    ax[i].axvline(x=ph[v], c="gray", ls=":", label="predictability horizon")
plt.tight_layout()
[Figure: initialized vs. uninitialized ACC per lead for tos, sos, and AMO, with the predictability horizon marked]
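
climpred.horizon.horizon() condenses a boolean mask over lead into the last lead up to which the condition holds without interruption. A toy example of that reading, hedged since edge cases may differ:

# Toy example: the condition holds for leads 1-3 and first fails at lead 4,
# so we expect a predictability horizon of 3 (hedged reading of the semantics).
toy_cond = xr.DataArray(
    [True, True, True, False, True],
    dims="lead",
    coords={"lead": [1, 2, 3, 4, 5]},
)
horizon(toy_cond)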

Bootstrapping with Replacement#

Here, we bootstrap the ensemble with replacement [Goddard et al., 2013] to compare the initialized ensemble against uninitialized, persistence, and climatology reference forecasts with PerfectModelEnsemble.bootstrap(). The visualization is based on those used in Li et al. [2016]. The p-value gives the probability that the reference forecast beats the initialized forecast, estimated from N bootstrap iterations with replacement.

for metric in ["acc", "rmse"]:
    bootstrapped = pm[["tos"]].bootstrap(
        metric=metric,
        comparison="m2e",
        dim=["init", "member"],
        resample_dim="member",
        iterations=21,
        sig=95,
        reference=["uninitialized", "persistence", "climatology"],
    )

    climpred.graphics.plot_bootstrapped_skill_over_leadyear(bootstrapped)
    # adds gray line where last lead p <= 0.05
    ph = horizon(bootstrapped.sel(results="p", skill="uninitialized") <= 0.05)
    plt.axvline(x=ph.tos, c="gray", ls=":", label="predictability horizon")
    plt.title(
        " ".join(["SST", "North Atlantic", "Annual:", metric.upper()]), fontsize=18
    )
    plt.ylabel(metric)
    plt.show()
[Figures: bootstrapped ACC and RMSE skill of tos over lead years, with confidence intervals and the predictability horizon marked]
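
The bootstrap output can also be inspected directly. A short sketch, assuming the results coordinate of the object from the cell above carries the verified skill, the p-value, and the confidence-interval bounds:

# Inspect the bootstrap output of the last loop iteration (metric="rmse").
print(bootstrapped.coords["results"].values)
p = bootstrapped.sel(results="p", skill="uninitialized")["tos"]
print(p.to_series())  # p-value per lead; small p favors initialization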

Computing Skill with Different Comparison Methods#

Here, we use PerfectModelEnsemble.verify() to compute the Anomaly Correlation Coefficient (ACC, climpred.metrics._pearson_r()) with different comparison methods, which by design yields different ACC values. See comparisons for a description of the various ways to compute skill scores in a perfect-model framework.

for c in ["e2c", "m2c", "m2e", "m2m"]:
    dim = "init" if c == "e2c" else ["init", "member"]
    pm.verify(metric="acc", comparison=c, dim=dim)["tos"].plot(label=c)
# persistence and climatology references as baselines
# (reusing comparison and dim from the last loop iteration, i.e. "m2m")
for r in ["persistence", "climatology"]:
    pm.verify(metric="acc", comparison=c, dim=dim, reference=r)["tos"].sel(
        skill=r
    ).plot(label=r, ls=":")
plt.ylabel("ACC")
plt.xticks(np.arange(1, 21))
plt.legend()
plt.title(
    "Different forecast-reference comparisons for pearson_r"
    "\n lead to systematically different magnitude of skill score"
)
plt.show()
[Figure: ACC over lead years for the e2c, m2c, m2e, and m2m comparisons, plus persistence and climatology references]

3-dimensional output (maps)#

We also have some sample output that contains gridded time series on the curvilinear MPI grid. climpred is indifferent to any dimensions that exist in addition to init, member, and lead. In other words, the functions are set up to make these computations on a grid, if one includes lat, lon, lev, depth, etc.

initialized3d: The ensemble dataset of members (1, …, 4), inits (initialization years: 3014, 3061, 3175, 3237), and lead years (1, …, 5).

control3d: The control dataset spanning (3000, …, 3049).

# Sea surface temperature
initialized3d = climpred.tutorial.load_dataset("MPI-PM-DP-3D")
control3d = climpred.tutorial.load_dataset("MPI-control-3D")

initialized3d["lead"].attrs = {"units": "years"}
# Create climpred PerfectModelEnsemble object.
pm = (
    climpred.PerfectModelEnsemble(initialized3d)
    .add_control(control3d)
    .generate_uninitialized()[["tos"]]
)
/Users/aaron.spring/Coding/climpred/climpred/utils.py:191: UserWarning: Assuming annual resolution starting Jan 1st due to numeric inits. Please change ``init`` to a datetime if it is another resolution. We recommend using xr.CFTimeIndex as ``init``, see https://climpred.readthedocs.io/en/stable/setting-up-data.html.
  warnings.warn(

Maps of Skill by Lead Year#

skill = pm.verify(
    metric="mae",
    comparison="m2e",
    dim=["init", "member"],
    reference=["uninitialized", "persistence", "climatology"],
)
skill.tos.plot(col="lead", row="skill", yincrease=False, robust=True)
<xarray.plot.facetgrid.FacetGrid at 0x1464bdc10>
[Figure: maps of tos MAE skill by lead year (columns) for initialized, uninitialized, persistence, and climatology (rows)]
# initialized data has only five leads
ph = horizon(skill.sel(skill="initialized") < skill.drop_sel(skill="initialized"))

ph.tos.plot(yincrease=False, levels=np.arange(0.5, 5.51), col="skill")
plt.suptitle("Predictable due to initialization over reference skill for", y=1.02)
plt.show()
[Figure: maps of the predictability horizon of initialized skill over each reference skill]
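
The horizon maps can be condensed further, for example into the fraction of ocean grid cells with at least one predictable lead year. A hedged sketch that avoids assuming the names of the curvilinear grid dimensions:

# Hedged sketch: per reference skill, the fraction of (non-land) grid cells
# with a predictability horizon of at least one lead year.
spatial_dims = [d for d in ph["tos"].dims if d != "skill"]
ocean = ph["tos"].notnull()  # mask out land cells (NaN)
frac_predictable = (ph["tos"] >= 1).where(ocean).mean(spatial_dims)
print(frac_predictable.to_series())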

References#

[1] L. Goddard, A. Kumar, A. Solomon, D. Smith, G. Boer, P. Gonzalez, V. Kharin, W. Merryfield, C. Deser, S. J. Mason, B. P. Kirtman, R. Msadek, R. Sutton, E. Hawkins, T. Fricker, G. Hegerl, C. A. T. Ferro, D. B. Stephenson, G. A. Meehl, T. Stockdale, R. Burgman, A. M. Greene, Y. Kushnir, M. Newman, J. Carton, I. Fukumori, and T. Delworth. A verification framework for interannual-to-decadal predictions experiments. Climate Dynamics, 40(1-2):245–272, January 2013. doi:10/f4jjvf.

[2] S. M. Griffies and K. Bryan. A predictability study of simulated North Atlantic multidecadal variability. Climate Dynamics, 13(7-8):459–487, August 1997. doi:10/ch4kc4.

[3] Hongmei Li, Tatiana Ilyina, Wolfgang A. Müller, and Frank Sienz. Decadal predictions of the North Atlantic CO2 uptake. Nature Communications, 7:11076, March 2016. doi:10/f8wkrs.

[4] Aaron Spring and Tatiana Ilyina. Predictability Horizons in the Global Carbon Cycle Inferred From a Perfect-Model Framework. Geophysical Research Letters, 47(9):e2019GL085311, 2020. doi:10/ggtbv2.

[5] Roland Séférian, Sarah Berthet, and Matthieu Chevallier. Assessing the Decadal Predictability of Land and Ocean Carbon Uptake. Geophysical Research Letters, March 2018. doi:10/gdb424.