API Reference#
This page provides an auto-generated summary of climpred's API. For more details and examples, refer to the relevant chapters in the main part of the documentation.
High-Level Classes#
A primary feature of climpred is our prediction ensemble objects, HindcastEnsemble and PerfectModelEnsemble. Users can add their initialized ensemble to these classes, as well as verification products (assimilations, reconstructions, observations), control runs, and uninitialized ensembles.
PredictionEnsemble#
PredictionEnsemble is the base class for HindcastEnsemble and PerfectModelEnsemble. PredictionEnsemble cannot be instantiated directly; HindcastEnsemble and PerfectModelEnsemble inherit its common base functionality.
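Since the shared behavior lives on the base class, the pattern can be sketched in plain Python. Everything below (the constructor guard, the `_datasets` attribute, the `__len__` behavior) is a hypothetical illustration of the inheritance structure, not climpred's actual implementation:

```python
class PredictionEnsemble:
    """Base class holding functionality shared by both subclasses."""

    def __init__(self, datasets):
        # Guard against direct instantiation of the base class.
        if type(self) is PredictionEnsemble:
            raise TypeError(
                "PredictionEnsemble cannot be instantiated directly; "
                "use HindcastEnsemble or PerfectModelEnsemble"
            )
        self._datasets = dict(datasets)  # name -> dataset-like object

    def __len__(self):
        # Shared builtin behavior inherited by both subclasses.
        return len(self._datasets)


class HindcastEnsemble(PredictionEnsemble):
    pass


class PerfectModelEnsemble(PredictionEnsemble):
    pass


hindcast = HindcastEnsemble({"initialized": [0.1, 0.2, 0.3]})
print(len(hindcast))  # → 1
```

Both subclasses get `__len__` for free from the base class, which is the point of the shared-base design.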
PredictionEnsemble: The main prediction ensemble object.
PredictionEnsemble.__init__: Create a PredictionEnsemble object.
Builtin#
__len__: Return the number of all variables.
__iter__: Iterate over the underlying xr.Datasets.
__delitem__: Remove a variable from the underlying xr.Datasets.
__contains__: Check whether a variable is in the PredictionEnsemble.
__add__: Add.
__sub__: Subtract.
__mul__: Multiply.
__truediv__: Divide.
__getitem__: Allow subsetting variable(s) from the underlying xr.Datasets.
__getattr__: Allow xr.Dataset methods to be applied to the PredictionEnsemble.
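The builtins above can be illustrated with a toy container that forwards operations to every variable it holds. `MiniEnsemble` and all of its internals are hypothetical stand-ins for illustration, not climpred code:

```python
class MiniEnsemble:
    def __init__(self, data):
        self.data = dict(data)  # variable name -> list of values

    def _math(self, op, other):
        # Apply a scalar operation to every value of every variable.
        return MiniEnsemble(
            {name: [op(v, other) for v in values] for name, values in self.data.items()}
        )

    def __add__(self, other):
        return self._math(lambda v, o: v + o, other)

    def __mul__(self, other):
        return self._math(lambda v, o: v * o, other)

    def __contains__(self, name):  # supports `"sst" in ensemble`
        return name in self.data

    def __getitem__(self, name):  # subset a single variable
        return MiniEnsemble({name: self.data[name]})

    def __len__(self):  # number of variables
        return len(self.data)


ensemble = MiniEnsemble({"sst": [1.0, 2.0], "pr": [3.0, 4.0]})
shifted = ensemble + 1.0
print(shifted.data["sst"])  # → [2.0, 3.0]
print("pr" in ensemble, len(ensemble["pr"]))  # → True 1
```

This is why `ensemble + 1` is convenient: the arithmetic is broadcast to every variable in every underlying dataset at once.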
Properties#
coords: Return the coordinates of the underlying datasets.
nbytes: Byte sizes of all PredictionEnsemble._datasets.
sizes: Return the sizes of the underlying datasets.
dims: Return the dimensions of the underlying datasets.
chunks: Return the chunks of the underlying datasets.
chunksizes: Return the chunksizes of the underlying datasets.
data_vars: Return the data variables of the underlying datasets.
equals: Check if two PredictionEnsembles are equal.
identical: Check if two PredictionEnsembles are identical.
HindcastEnsemble#
A HindcastEnsemble
is a prediction ensemble that is
initialized off of some form of observations (an assimilation, reanalysis, etc.). Thus,
it is anticipated that forecasts are verified against observation-like products. Read
more about the terminology here.
HindcastEnsemble: An object for initialized prediction ensembles.
HindcastEnsemble.__init__: Create a HindcastEnsemble object.
Add and Retrieve Datasets#
add_observations: Add verification data against which to verify the initialized ensemble.
add_uninitialized: Add a companion uninitialized ensemble for comparison to verification data.
get_initialized: Return the initialized ensemble.
get_uninitialized: Return the uninitialized ensemble.
get_observations: Return the observations.
Analysis Functions#
verify: Verify the initialized ensemble against observations.
bootstrap: Bootstrap with replacement according to Goddard et al. [2013].
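Bootstrapping with replacement can be sketched in a few lines of plain Python. The function below is an illustrative stand-in; climpred's bootstrap operates on xarray objects across many metrics and dimensions:

```python
import random


def bootstrap_ensemble_mean(members, iterations, seed=None):
    """Resample members with replacement and collect the resampled means."""
    rng = random.Random(seed)
    means = []
    for _ in range(iterations):
        # Draw len(members) members with replacement and record their mean.
        sample = [rng.choice(members) for _ in members]
        means.append(sum(sample) / len(sample))
    return means


skill_per_member = [0.1, 0.4, 0.2, 0.3]
distribution = bootstrap_ensemble_mean(skill_per_member, iterations=1000, seed=42)
# Percentiles of `distribution` (e.g. 2.5% and 97.5%) give confidence
# bounds on the ensemble-mean statistic.
```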
Generate Data#
generate_uninitialized: Generate an uninitialized ensemble by resampling from the initialized ensemble.
Pre-Processing#
smooth: Smooth in space and/or aggregate in time.
remove_bias: Remove bias from the initialized ensemble.
remove_seasonality: Remove the seasonal cycle from all datasets.
Visualization#
plot: Plot datasets from the HindcastEnsemble.
plot_alignment: Plot how initializations are aligned with verification data.
PerfectModelEnsemble#
A PerfectModelEnsemble is a prediction ensemble that is initialized from a control simulation at a number of randomly chosen initialization dates. Thus, forecasts cannot be verified against real-world observations. Instead, they are compared to one another and to the original control run. Read more about the terminology here.
PerfectModelEnsemble: An object for "perfect model" prediction ensembles.
PerfectModelEnsemble.__init__: Create a PerfectModelEnsemble object.
Add and Retrieve Datasets#
add_control: Add the control run that initialized the prediction ensemble.
get_initialized: Return the initialized ensemble.
get_control: Return the control run as an xr.Dataset.
get_uninitialized: Return the uninitialized ensemble.
Analysis Functions#
verify: Verify initialized predictions against a configuration of its members.
bootstrap: Bootstrap with replacement according to Goddard et al. [2013].
Generate Data#
generate_uninitialized: Generate an uninitialized ensemble by resampling from the control simulation.
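Resampling a control simulation into pseudo-members can be sketched as follows. `pseudo_member` is a hypothetical helper for illustration; the real method operates on xarray datasets:

```python
import random


def pseudo_member(control, length, rng):
    """Draw one random contiguous segment from a long control time series."""
    start = rng.randrange(len(control) - length + 1)
    return control[start:start + length]


rng = random.Random(0)
control_run = [float(i) for i in range(100)]  # stand-in control time series
uninitialized = [pseudo_member(control_run, length=10, rng=rng) for _ in range(5)]
print(len(uninitialized), len(uninitialized[0]))  # → 5 10
```

Because the segments preserve the control run's internal variability but carry no initialization information, they serve as an "uninitialized" baseline for the initialized ensemble.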
Pre-Processing#
smooth: Smooth in space and/or aggregate in time.
remove_seasonality: Remove the seasonal cycle from all datasets.
Visualization#
plot: Plot datasets from the PerfectModelEnsemble.
Direct Function Calls#
While no longer encouraged, a user can call functions in climpred directly. This requires entering more arguments, e.g. the initialized ensemble as well as a verification product. Our objects HindcastEnsemble and PerfectModelEnsemble wrap most of these functions, making the analysis process much simpler.
Bootstrap#
Create a pseudo-ensemble from a control run.
Resample an uninitialized hindcast from historical members.
Calculate DPP significance levels from a resampled dataset.
Calculate variance-weighted mean period significance levels from a resampled dataset.
Prediction#
Compute a predictability skill score in a perfect-model framework.
Reference#
Compute the skill of a persistence forecast from a simulation.
Compute persistence skill based on the first lead.
Verify an uninitialized ensemble against verification data.
Compute the skill of a climatology forecast.
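The two simplest reference forecasts can be written out in a few lines of plain Python. Both functions here are illustrative stand-ins, not climpred's implementations; skill scores compare an initialized forecast against baselines like these:

```python
def persistence_forecast(value_at_init, n_leads):
    """Carry the value observed at initialization forward to every lead."""
    return [value_at_init] * n_leads


def climatology_forecast(history, n_leads):
    """Forecast the long-term mean of the historical record at every lead."""
    mean = sum(history) / len(history)
    return [mean] * n_leads


history = [0.2, -0.1, 0.4, 0.1]
print(persistence_forecast(history[-1], 3))  # → [0.1, 0.1, 0.1]
print(climatology_forecast(history, 2))
```

An initialized prediction system is only useful at leads where it beats these cheap references.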
Horizon#
Calculate the predictability horizon based on a condition.
Statistics#
Calculate the decorrelation time of a time series.
Calculate the Diagnostic Potential Predictability (DPP).
Calculate the variance-weighted mean period of a time series.
Remove a polynomial of a given degree from a time series.
Remove a polynomial of a given degree along a given dimension.
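Polynomial removal at degree 1 reduces to an ordinary least-squares line fit. The sketch below is plain Python for illustration only; the climpred helpers handle arbitrary degrees along labeled xarray dimensions:

```python
def detrend_linear(y):
    """Fit y = intercept + slope * x by least squares and return the residuals."""
    n = len(y)
    x = list(range(n))
    x_mean = sum(x) / n
    y_mean = sum(y) / n
    slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) / sum(
        (xi - x_mean) ** 2 for xi in x
    )
    intercept = y_mean - slope * x_mean
    # Subtract the fitted line; what remains is the detrended anomaly.
    return [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]


residual = detrend_linear([1.0, 3.0, 5.0, 7.0])  # a perfect line detrends to ~0
```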
Tutorial#
Load example data or a mask from an online repository.
Preprocessing#
Concatenate a multi-member, multi-initialization hindcast experiment.
Rename existing dimensions to CLIMPRED_ENSEMBLE_DIMS.
Rename ensemble dimensions common to SubX or CESM output.
Set the time axis to integers starting from an offset.
Get the path of a file for MPI-ESM standard output file names and directories.
Smoothing#
Apply temporal smoothing by creating rolling smooth-timestep means.
Quick regridding function.
Visualization#
Plot ensemble prediction skill as in Li et al. 2016, Fig. 3a-c.
Plot datasets from PerfectModelEnsemble.
Plot datasets from HindcastEnsemble.
Utils#
Convert …
Convert …
Metrics#
For a thorough look at our metrics library, please see the metrics page.
Metric: Master class for all metrics.
Metric.__init__: Metric initialization.
Metric.__repr__: Show metadata of the metric class.
_get_norm_factor: Get the normalization factor for normalizing distance metrics.
pearson_r: Pearson product-moment correlation coefficient.
pearson_r_p_value: Probability that forecast and verification data are linearly uncorrelated.
effective_sample_size: Effective sample size for temporally correlated data.
pearson_r_eff_p_value: pearson_r_p_value accounting for autocorrelation.
spearman_r: Spearman's rank correlation coefficient.
spearman_r_p_value: Probability that forecast and verification data are monotonically uncorrelated.
spearman_r_eff_p_value: spearman_r_p_value accounting for autocorrelation.
mse: Mean Square Error (MSE).
rmse: Root Mean Square Error (RMSE).
mae: Mean Absolute Error (MAE).
median_absolute_error: Median Absolute Error.
nmse: Normalized MSE (NMSE), also known as Normalized Ensemble Variance (NEV).
nmae: Normalized Mean Absolute Error (NMAE).
nrmse: Normalized Root Mean Square Error (NRMSE).
msess: Mean Squared Error Skill Score (MSESS).
mape: Mean Absolute Percentage Error (MAPE).
smape: Symmetric Mean Absolute Percentage Error (sMAPE).
uacc: Bushuk's unbiased Anomaly Correlation Coefficient (uACC).
std_ratio: Ratio of standard deviations of the forecast over the verification data.
conditional_bias: Conditional bias between forecast and verification data.
unconditional_bias: Unconditional additive bias.
bias_slope: Bias slope between verification data and forecast standard deviations.
msess_murphy: Murphy's Mean Square Error Skill Score (MSESS).
crps: Continuous Ranked Probability Score (CRPS).
crpss: Continuous Ranked Probability Skill Score (CRPSS).
crpss_es: Continuous Ranked Probability Skill Score Ensemble Spread.
brier_score: Brier Score for binary events.
threshold_brier_score: Brier score of an ensemble for exceeding given thresholds.
rps: Ranked Probability Score (RPS).
discrimination: Discrimination.
reliability: Reliability.
rank_histogram: Rank histogram or Talagrand diagram.
contingency: Contingency table.
roc: Receiver Operating Characteristic (ROC).
spread: Ensemble spread, taking the standard deviation over the member dimension.
mul_bias: Multiplicative bias.
less: Logarithmic Ensemble Spread Score (LESS).
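To make the distance metrics concrete, here are plain-Python versions of three of them. climpred's implementations are vectorized over xarray dimensions, so these toy functions only show what the formulas compute:

```python
import math


def mse(forecast, obs):
    """Mean of the squared forecast-minus-observation differences."""
    return sum((f - o) ** 2 for f, o in zip(forecast, obs)) / len(forecast)


def rmse(forecast, obs):
    """Square root of the MSE, in the units of the data."""
    return math.sqrt(mse(forecast, obs))


def pearson_r(forecast, obs):
    """Covariance of forecast and obs normalized by their standard deviations."""
    n = len(forecast)
    fm = sum(forecast) / n
    om = sum(obs) / n
    cov = sum((f - fm) * (o - om) for f, o in zip(forecast, obs))
    var_f = sum((f - fm) ** 2 for f in forecast)
    var_o = sum((o - om) ** 2 for o in obs)
    return cov / math.sqrt(var_f * var_o)


forecast = [0.1, 0.3, 0.2, 0.5]
obs = [0.0, 0.4, 0.1, 0.6]
print(round(rmse(forecast, obs), 3))  # → 0.1
print(pearson_r(forecast, obs))
```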
Comparisons#
For a thorough look at our comparisons library, please see the comparisons page.
Comparison: Master class for all comparisons.
Comparison.__init__: Comparison initialization. See Comparisons.
Comparison.__repr__: Show metadata of the comparison class.
e2o: Compare the ensemble mean forecast to the verification data.
m2o: Compare each ensemble member individually to the verification data.
m2m: Compare all members to all others in turn, while leaving out the verification member.
m2e: Compare all members to the ensemble mean, while leaving out the verification member from the ensemble mean.
m2c: Compare all other member forecasts to a single member verification.
e2c: Compare the ensemble mean forecast to a single member verification.
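The difference between the ensemble-mean and member-wise comparison styles can be sketched in plain Python. The `e2o` and `m2o` functions below are illustrative toys named after the comparisons above, not climpred's implementations:

```python
def mae(forecast, obs):
    """Mean Absolute Error between two equal-length series."""
    return sum(abs(f - o) for f, o in zip(forecast, obs)) / len(forecast)


def e2o(members, obs, metric):
    """Reduce the ensemble to its mean, then verify once."""
    ens_mean = [sum(values) / len(values) for values in zip(*members)]
    return metric(ens_mean, obs)


def m2o(members, obs, metric):
    """Verify every member individually, returning one score per member."""
    return [metric(member, obs) for member in members]


members = [[0.0, 1.0, 2.0], [2.0, 1.0, 0.0]]
obs = [1.0, 1.0, 1.0]
print(e2o(members, obs, mae))  # ensemble mean is [1.0, 1.0, 1.0] → 0.0
print(m2o(members, obs, mae))
```

The example shows why the choice of comparison matters: averaging first can cancel member errors (e2o scores 0.0 here), while member-wise verification exposes each member's own error.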
Config#
Set options analogous to xarray.
set_options: Set options for climpred in a controlled context.
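The xarray-style pattern behind such an options interface can be sketched with a small context manager. The `OPTIONS` dict, the `seasonality` key, and its default are hypothetical stand-ins here; see climpred's documentation for the options it actually accepts:

```python
OPTIONS = {"seasonality": "month"}  # hypothetical option and default


class set_options:
    """xarray-style context manager: set options, restore them on exit."""

    def __init__(self, **kwargs):
        unknown = set(kwargs) - set(OPTIONS)
        if unknown:
            raise ValueError(f"unknown option(s): {unknown}")
        self._old = {key: OPTIONS[key] for key in kwargs}
        OPTIONS.update(kwargs)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Restore the previous values even if the block raised.
        OPTIONS.update(self._old)


with set_options(seasonality="dayofyear"):
    print(OPTIONS["seasonality"])  # → dayofyear
print(OPTIONS["seasonality"])  # → month (restored on exit)
```

Used as a context manager, the option change is scoped to the `with` block; called without `with`, it changes the option globally, mirroring xarray's behavior.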