API Reference

This page provides an auto-generated summary of climpred’s API. For more details and examples, refer to the relevant chapters in the main part of the documentation.

High-Level Classes

A primary feature of climpred is its prediction ensemble objects, HindcastEnsemble and PerfectModelEnsemble. Users can attach their initialized ensemble to these classes, along with an arbitrary number of verification products (assimilations, reconstructions, observations), control runs, and uninitialized ensembles.

HindcastEnsemble

A HindcastEnsemble is a prediction ensemble that is initialized from some form of observations (an assimilation, reanalysis, etc.). Forecasts are therefore verified against observation-like products. Read more about the terminology here.

HindcastEnsemble(xobj)

An object for climate prediction ensembles initialized by a data-like product.
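
A minimal quick-start sketch using the climpred tutorial datasets CESM-DP-SST (initialized hindcasts) and ERSST (observation-like verification); dataset names and any further preprocessing (e.g. computing anomalies) are illustrative choices:

    import climpred
    from climpred.tutorial import load_dataset

    hind = load_dataset("CESM-DP-SST")  # initialized ensemble with init, lead, member dims
    obs = load_dataset("ERSST")         # observation-like verification product

    hindcast = climpred.HindcastEnsemble(hind)
    hindcast = hindcast.add_observations(obs)
    print(hindcast.get_initialized())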

Add and Retrieve Datasets

HindcastEnsemble.__init__(xobj)

Create a HindcastEnsemble object by inputting output from a prediction ensemble in xarray format.

HindcastEnsemble.add_observations(xobj)

Add verification data against which to verify the initialized ensemble.

HindcastEnsemble.add_uninitialized(xobj)

Add a companion uninitialized ensemble for comparison to verification data.

HindcastEnsemble.get_initialized()

Returns the xarray dataset for the initialized ensemble.

HindcastEnsemble.get_observations()

Returns xarray Datasets of the observations/verification data.

HindcastEnsemble.get_uninitialized()

Returns the xarray dataset for the uninitialized ensemble.

Analysis Functions

HindcastEnsemble.verify([reference, metric, …])

Verifies the initialized ensemble against observations.

HindcastEnsemble.bootstrap([metric, …])

Bootstrap with replacement according to Goddard et al. 2013.
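
Continuing the quick-start sketch above (metric, comparison, and alignment choices are illustrative):

    # Deterministic skill of the ensemble-mean forecast against the observations.
    skill = hindcast.verify(
        metric="pearson_r", comparison="e2o", dim="init", alignment="same_verifs"
    )

    # Same setup with bootstrapped confidence intervals and a persistence reference.
    boot = hindcast.bootstrap(
        metric="pearson_r",
        comparison="e2o",
        dim="init",
        alignment="same_verifs",
        reference="persistence",
        iterations=100,
    )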

Pre-Processing

HindcastEnsemble.smooth([smooth_kws, how])

Smooth all entries of the PredictionEnsemble in the same manner so that prediction skill can still be calculated afterwards.

HindcastEnsemble.remove_bias(alignment[, …])

Calculate and remove bias from HindcastEnsemble.
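
A hedged sketch of the pre-processing step, continuing from the HindcastEnsemble above (the smoothing window and the bias-removal defaults may differ between versions):

    # Aggregate leads into 4-step rolling means before verification.
    hindcast = hindcast.smooth(smooth_kws={"lead": 4})

    # Estimate the mean bias against the observations and remove it.
    hindcast = hindcast.remove_bias(alignment="same_verifs")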

Visualization

HindcastEnsemble.plot([variable, ax, …])

Plot datasets from PredictionEnsemble.

PerfectModelEnsemble

A PerfectModelEnsemble is a prediction ensemble that is initialized from a control simulation at a number of randomly chosen initialization dates. Forecasts therefore cannot be verified against real-world observations; instead, they are compared to one another and to the original control run. Read more about the terminology here.

PerfectModelEnsemble(xobj)

An object for “perfect model” climate prediction ensembles.
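
A minimal sketch using the tutorial datasets MPI-PM-DP-1D (perfect-model ensemble) and MPI-control-1D (control run); names and keyword choices are illustrative:

    import climpred
    from climpred.tutorial import load_dataset

    init = load_dataset("MPI-PM-DP-1D")       # initialized perfect-model ensemble
    control = load_dataset("MPI-control-1D")  # control run it was initialized from

    pm = climpred.PerfectModelEnsemble(init)
    pm = pm.add_control(control)

    # Verify each member against the ensemble mean of the remaining members.
    skill = pm.verify(metric="rmse", comparison="m2e", dim=["init", "member"])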

Add and Retrieve Datasets

PerfectModelEnsemble.__init__(xobj)

Create a PerfectModelEnsemble object by inputting output from the control run in xarray format.

PerfectModelEnsemble.add_control(xobj)

Add the control run that initialized the climate prediction ensemble.

PerfectModelEnsemble.get_initialized()

Returns the xarray dataset for the initialized ensemble.

PerfectModelEnsemble.get_control()

Returns the control as an xarray dataset.

PerfectModelEnsemble.get_uninitialized()

Returns the xarray dataset for the uninitialized ensemble.

Analysis Functions

PerfectModelEnsemble.verify([metric, …])

Verify initialized predictions against a configuration of other ensemble members.

PerfectModelEnsemble.bootstrap([metric, …])

Bootstrap with replacement according to Goddard et al. 2013.

Generate Data

PerfectModelEnsemble.generate_uninitialized()

Generate an uninitialized ensemble by bootstrapping the initialized prediction ensemble.
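
For example, continuing the PerfectModelEnsemble sketch above (the reference keyword is an assumption that may depend on the climpred version):

    # Build an uninitialized counterpart by resampling the initialized ensemble,
    # then benchmark initialized skill against it.
    pm = pm.generate_uninitialized()
    skill = pm.verify(
        metric="rmse",
        comparison="m2e",
        dim=["init", "member"],
        reference="uninitialized",
    )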

Pre-Processing

PerfectModelEnsemble.smooth([smooth_kws, how])

Smooth all entries of the PredictionEnsemble in the same manner so that prediction skill can still be calculated afterwards.

Visualization

PerfectModelEnsemble.plot([variable, ax, …])

Plot datasets from PredictionEnsemble.

Direct Function Calls

A user can also call climpred functions directly. This requires passing more arguments, e.g. the initialized ensemble as an xarray.Dataset or xarray.DataArray as well as a verification product. Our HindcastEnsemble and PerfectModelEnsemble objects wrap most of these functions, making the analysis process much simpler. Once all of the functions are fully wrapped, the ability to call them directly will likely be deprecated.
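
A hedged sketch of a direct function call on xarray objects, equivalent to HindcastEnsemble.verify (keyword choices are illustrative):

    from climpred.prediction import compute_hindcast
    from climpred.tutorial import load_dataset

    hind = load_dataset("CESM-DP-SST")  # initialized ensemble
    obs = load_dataset("ERSST")         # verification data

    # The datasets are passed explicitly instead of being stored on an object.
    skill = compute_hindcast(hind, obs, metric="mse", comparison="e2o")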

Bootstrap

bootstrap_compute(hind, verif[, hist, …])

Bootstrap compute with replacement.

bootstrap_hindcast(hind, hist, verif[, …])

Bootstrap compute with replacement; wrapper of bootstrap_compute for hindcasts.

bootstrap_perfect_model(init_pm, control[, …])

Bootstrap compute with replacement; wrapper of bootstrap_compute for the perfect-model framework.

bootstrap_uninit_pm_ensemble_from_control_cftime(…)

Create a pseudo-ensemble from the control run.

bootstrap_uninitialized_ensemble(hind, hist)

Resample uninitialized hindcast from historical members.

dpp_threshold(control[, sig, iterations, dim])

Calculate DPP significance levels from a re-sampled dataset.

varweighted_mean_period_threshold(control[, …])

Calculate the variance-weighted mean period significance levels from a re-sampled dataset.
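
A hedged sketch of a direct bootstrap call; the CESM-LE dataset is used here as the historical/uninitialized ensemble, and all keyword choices are illustrative:

    from climpred.bootstrap import bootstrap_hindcast
    from climpred.tutorial import load_dataset

    hind = load_dataset("CESM-DP-SST")  # initialized hindcast
    hist = load_dataset("CESM-LE")      # uninitialized historical ensemble
    obs = load_dataset("ERSST")         # verification data

    # Resamples members with replacement to attach confidence intervals to the skill.
    boot = bootstrap_hindcast(
        hind, hist, obs, metric="pearson_r", comparison="e2o", iterations=100
    )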

Prediction

compute_hindcast(hind, verif[, metric, …])

Verify hindcast predictions against verification data.

compute_perfect_model(init_pm[, control, …])

Compute a predictability skill score for a perfect-model framework simulation dataset.

Reference

compute_persistence(hind, verif[, metric, …])

Computes the skill of a persistence forecast from a simulation.

compute_uninitialized(hind, uninit, verif[, …])

Verify an uninitialized ensemble against verification data.

compute_climatology(hind[, verif, metric, …])

Computes the skill of a climatology forecast.

Horizon

horizon(cond)

Calculate the predictability horizon based on a condition cond.
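
For example, a hedged sketch continuing from the HindcastEnsemble above (the skill threshold of 0.5 is illustrative):

    from climpred.horizon import horizon

    skill = hindcast.verify(
        metric="pearson_r", comparison="e2o", dim="init", alignment="same_verifs"
    )

    # Last lead for which the correlation skill stays above the threshold.
    ph = horizon(skill > 0.5)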

Statistics

decorrelation_time(da[, iterations, dim])

Calculate the decorrelation time of a time series.

dpp(ds[, dim, m, chunk])

Calculate the Diagnostic Potential Predictability (DPP).

varweighted_mean_period(da[, dim])

Calculate the variance-weighted mean period of a time series based on xrft.power_spectrum.
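
A hedged sketch applying the statistics functions to a control run (dataset name and keyword choices are illustrative; varweighted_mean_period requires xrft):

    from climpred.stats import dpp, varweighted_mean_period
    from climpred.tutorial import load_dataset

    control = load_dataset("MPI-control-1D")

    # Diagnostic Potential Predictability from chunked variance ratios.
    dpp_skill = dpp(control, dim="time", m=10, chunk=True)

    # Variance-weighted mean period of the control time series.
    vwmp = varweighted_mean_period(control, dim="time")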

Tutorial

load_dataset([name, cache, cache_dir, …])

Load example data or a mask from an online repository.
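
For example:

    from climpred.tutorial import load_dataset

    # Downloads (and caches locally) the CESM Decadal Prediction SST example dataset.
    ds = load_dataset("CESM-DP-SST")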

Preprocessing

load_hindcast([inits, members, preprocess, …])

Load multi-member, multi-initialization hindcast experiment into one xr.Dataset compatible with climpred.

rename_to_climpred_dims(xro)

Rename existing dimensions in the xarray object xro to CLIMPRED_ENSEMBLE_DIMS based on their existing names.

rename_SLM_to_climpred_dims(xro)

Rename ensemble dimensions common to SubX or CESM output.

get_path([dir_base_experiment, member, …])

Get the path of a file for MPI-ESM standard output file names and directory.

Smoothing

temporal_smoothing(ds[, tsmooth_kws, how, …])

Apply temporal smoothing by creating rolling smooth-timestep means.

spatial_smoothing_xesmf(ds[, d_lon_lat_kws, …])

Quick regridding function using xESMF.
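
A hedged sketch of the temporal smoother (the window size is illustrative; spatial_smoothing_xesmf additionally requires xESMF and a dataset with lon/lat dimensions):

    from climpred.smoothing import temporal_smoothing
    from climpred.tutorial import load_dataset

    ds = load_dataset("CESM-DP-SST")

    # Average leads into 4-step rolling windows before computing skill.
    ds_smooth = temporal_smoothing(ds, tsmooth_kws={"lead": 4})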

Metrics

For a thorough look at our metrics library, please see the metrics page.

Metric(name, function, positive, …[, …])

Master class for all metrics.
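
A hedged sketch of a user-defined metric; the constructor arguments follow the pattern described in the metrics documentation but may differ between versions, and hindcast refers to the HindcastEnsemble constructed above:

    from climpred.metrics import Metric

    def _my_mse(forecast, verif, dim=None, **metric_kwargs):
        """Plain mean squared error reduced over dim."""
        return ((forecast - verif) ** 2).mean(dim)

    my_mse = Metric(
        name="my_mse",
        function=_my_mse,
        positive=False,       # lower values are better
        probabilistic=False,  # deterministic metric
        unit_power=2,         # result carries squared input units
    )

    # Metric objects can be passed to verify in place of a string alias.
    skill = hindcast.verify(
        metric=my_mse, comparison="e2o", dim="init", alignment="same_verifs"
    )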

_get_norm_factor(comparison)

Get normalization factor for normalizing distance metrics.

_pearson_r(forecast, verif[, dim])

Pearson product-moment correlation coefficient.

_pearson_r_p_value(forecast, verif[, dim])

Probability that forecast and verification data are linearly uncorrelated.

_effective_sample_size(forecast, verif[, dim])

Effective sample size for temporally correlated data.

_pearson_r_eff_p_value(forecast, verif[, dim])

Probability that forecast and verification data are linearly uncorrelated, accounting for autocorrelation.

_spearman_r(forecast, verif[, dim])

Spearman’s rank correlation coefficient.

_spearman_r_p_value(forecast, verif[, dim])

Probability that forecast and verification data are monotonically uncorrelated.

_spearman_r_eff_p_value(forecast, verif[, dim])

Probability that forecast and verification data are monotonically uncorrelated, accounting for autocorrelation.

_mse(forecast, verif[, dim])

Mean Square Error (MSE).

_rmse(forecast, verif[, dim])

Root Mean Square Error (RMSE).

_mae(forecast, verif[, dim])

Mean Absolute Error (MAE).

_median_absolute_error(forecast, verif[, dim])

Median Absolute Error.

_nmse(forecast, verif[, dim])

Normalized MSE (NMSE), also known as Normalized Ensemble Variance (NEV).

_nmae(forecast, verif[, dim])

Normalized Mean Absolute Error (NMAE).

_nrmse(forecast, verif[, dim])

Normalized Root Mean Square Error (NRMSE).

_msess(forecast, verif[, dim])

Mean Squared Error Skill Score (MSESS).

_mape(forecast, verif[, dim])

Mean Absolute Percentage Error (MAPE).

_smape(forecast, verif[, dim])

Symmetric Mean Absolute Percentage Error (sMAPE).

_uacc(forecast, verif[, dim])

Bushuk’s unbiased Anomaly Correlation Coefficient (uACC).

_std_ratio(forecast, verif[, dim])

Ratio of standard deviations of the forecast over the verification data.

_conditional_bias(forecast, verif[, dim])

Conditional bias between forecast and verification data.

_unconditional_bias(forecast, verif[, dim])

Unconditional bias.

_bias_slope(forecast, verif[, dim])

Bias slope between verification data and forecast standard deviations.

_msess_murphy(forecast, verif[, dim])

Murphy’s Mean Square Error Skill Score (MSESS).

_crps(forecast, verif[, dim])

Continuous Ranked Probability Score (CRPS).

_crpss(forecast, verif[, dim])

Continuous Ranked Probability Skill Score.

_crpss_es(forecast, verif[, dim])

Continuous Ranked Probability Skill Score Ensemble Spread.

_brier_score(forecast, verif[, dim])

Brier Score for binary events.

_threshold_brier_score(forecast, verif[, dim])

Brier score of an ensemble for exceeding given thresholds.

_rps(forecast, verif[, dim])

Ranked Probability Score.

_discrimination(forecast, verif[, dim])

Returns the data required to construct the discrimination diagram for an event.

_reliability(forecast, verif[, dim])

Returns the data required to construct the reliability diagram for an event.

_rank_histogram(forecast, verif[, dim])

Rank histogram or Talagrand diagram.

_contingency(forecast, verif[, score, dim])

Contingency table.

_roc(forecast, verif[, dim])

Receiver Operating Characteristic.

Comparisons

For a thorough look at our comparisons, please see the comparisons page.

Comparison(name, function, hindcast, …[, …])

Master class for all comparisons.
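
Comparisons are usually selected by their string name when verifying. A hedged sketch for a probabilistic setup, continuing from the HindcastEnsemble above (keyword choices are illustrative):

    # The member-to-observations comparison keeps the member dimension so that
    # probabilistic metrics such as CRPS can reduce over it.
    skill = hindcast.verify(
        metric="crps", comparison="m2o", dim="member", alignment="same_verifs"
    )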

_e2o(hind, verif[, metric])

Compare the ensemble mean forecast to the verification data for a HindcastEnsemble setup.

_m2o(hind, verif[, metric])

Compares each ensemble member individually to the verification data for a HindcastEnsemble setup.

_m2m(ds[, metric])

Compare all members to all others in turn while leaving out the verification member.

_m2e(ds[, metric])

Compare all members to the ensemble mean while leaving out the reference member from the ensemble mean.

_m2c(ds[, metric])

Compare all other member forecasts to a single member verification, which is the first member.

_e2c(ds[, metric])

Compare ensemble mean forecast to single member verification.

Config

Set options analogous to xarray.

set_options(**kwargs)

Set options for climpred in a controlled context.
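
For example (a hedged sketch; the seasonality option name and its accepted values are assumptions that depend on the climpred version):

    import climpred

    # Temporarily change how initializations are grouped when computing
    # climatology-based references (assumed option name).
    with climpred.set_options(seasonality="month"):
        skill = hindcast.verify(
            metric="mae", comparison="e2o", dim="init", alignment="same_verifs"
        )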