Metrics

All high-level functions like verify() and bootstrap() (for both HindcastEnsemble and PerfectModelEnsemble objects) take a metric argument that determines which metric is used when computing predictability.

Note

We use the phrase ‘observations’ o here to refer to the ‘truth’ data to which we compare the forecast f. These metrics can also be applied relative to a control simulation, reconstruction, observations, etc. This would just change the resulting score from quantifying skill to quantifying potential predictability.

Internally, all metric functions require forecast and observations as inputs. The dim argument has to be set to specify the dimension(s) over which the metric is applied and hence reduced. See Comparisons for more on the dim argument.
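
For example, a deterministic skill computation might look like the following sketch (the hindcast object, comparison, and alignment keywords are illustrative; any metric keyword documented below can be substituted):

>>> hindcast.verify(metric='rmse', comparison='e2o', dim='init',
        alignment='same_verifs')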

Deterministic

Deterministic metrics assess the forecast as a definite prediction of the future, rather than in terms of probabilities. Another way to look at deterministic metrics is that they are a special case of probabilistic metrics where a value of one is assigned to one category and zero to all others [Jolliffe2011].

Correlation Metrics

The below metrics rely fundamentally on correlations in their computation. In the literature, correlation metrics are typically referred to as the Anomaly Correlation Coefficient (ACC). This implies that anomalies in the forecast and observations are being correlated. Typically, this is computed using the linear Pearson Product-Moment Correlation. However, climpred also offers the Spearman’s Rank Correlation.

Note that the p value associated with these correlations is computed via a separate metric. Use pearson_r_p_value or spearman_r_p_value to compute p values assuming that all samples in the correlated time series are independent. Use pearson_r_eff_p_value or spearman_r_eff_p_value to account for autocorrelation in the time series by calculating the effective_sample_size.
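
For example, the naive and the autocorrelation-aware p values can be requested with separate calls, as in this sketch (the hindcast object and keyword choices are illustrative):

>>> hindcast.verify(metric='pearson_r_p_value', comparison='e2o',
        dim='init', alignment='same_verifs')
>>> hindcast.verify(metric='pearson_r_eff_p_value', comparison='e2o',
        dim='init', alignment='same_verifs')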

Pearson Product-Moment Correlation Coefficient

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [1]: print(f"\n\nKeywords: {metric_aliases['pearson_r']}")


Keywords: ['pearson_r', 'pr', 'acc', 'pacc']
climpred.metrics._pearson_r(forecast, verif, dim=None, **metric_kwargs)[source]

Pearson product-moment correlation coefficient.

A measure of the linear association between the forecast and verification data that is independent of the mean and variance of the individual distributions. This is also known as the Anomaly Correlation Coefficient (ACC) when correlating anomalies.

corr = \frac{cov(f, o)}{\sigma_{f}\cdot\sigma_{o}},

where \sigma_{f} and \sigma_{o} represent the standard deviation of the forecast and verification data over the experimental period, respectively.

Note

Use metric pearson_r_p_value or pearson_r_eff_p_value to get the corresponding p value.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.pearson_r

Details:
  • minimum: -1.0
  • maximum: 1.0
  • perfect: 1.0
  • orientation: positive

See also

  • xskillscore.pearson_r

  • xskillscore.pearson_r_p_value

  • climpred.pearson_r_p_value

  • climpred.pearson_r_eff_p_value
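
Example

A minimal call sketch (object name and keyword choices are illustrative, mirroring the examples further below):

>>> hindcast.verify(metric='pearson_r', comparison='e2o', dim='init',
        alignment='same_verifs')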

Pearson Correlation p value

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [2]: print(f"\n\nKeywords: {metric_aliases['pearson_r_p_value']}")


Keywords: ['pearson_r_p_value', 'p_pval', 'pvalue', 'pval']
climpred.metrics._pearson_r_p_value(forecast, verif, dim=None, **metric_kwargs)[source]

Probability that forecast and verification data are linearly uncorrelated.

Two-tailed p value associated with the Pearson product-moment correlation coefficient (pearson_r), assuming that all samples are independent. Use pearson_r_eff_p_value to account for autocorrelation in the forecast and verification data.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.pearson_r_p_value

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 1.0
  • orientation: negative

See also

  • xskillscore.pearson_r

  • xskillscore.pearson_r_p_value

  • climpred.pearson_r

  • climpred.pearson_r_eff_p_value

Effective Sample Size

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [3]: print(f"\n\nKeywords: {metric_aliases['effective_sample_size']}")


Keywords: ['effective_sample_size', 'n_eff', 'eff_n']
climpred.metrics._effective_sample_size(forecast, verif, dim=None, **metric_kwargs)[source]

Effective sample size for temporally correlated data.

Note

Weights are not included here due to the dependence on temporal autocorrelation.

Note

This metric can only be used for hindcast-type simulations.

The effective sample size extracts the number of independent samples between two time series being correlated. It is derived by assessing the magnitude of the lag-1 autocorrelation coefficient in each of the time series being correlated. A higher autocorrelation induces a lower effective sample size, which in turn raises the correlation coefficient required to achieve a given p value.

The effective sample size is used in computing the effective p value. See pearson_r_eff_p_value and spearman_r_eff_p_value.

N_{eff} = N\left( \frac{1 -
           \rho_{f}\rho_{o}}{1 + \rho_{f}\rho_{o}} \right),

where \rho_{f} and \rho_{o} are the lag-1 autocorrelation coefficients for the forecast and verification data.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.effective_sample_size

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: N/A
  • orientation: positive

Reference:
  • Bretherton, Christopher S., et al. “The effective number of spatial degrees of freedom of a time-varying field.” Journal of climate 12.7 (1999): 1990-2009.
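
Example

A minimal call sketch (hindcast only, since this metric is restricted to hindcast-type simulations; keyword choices are illustrative):

>>> hindcast.verify(metric='effective_sample_size', comparison='e2o',
        dim='init', alignment='same_verifs')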

Pearson Correlation Effective p value

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [4]: print(f"\n\nKeywords: {metric_aliases['pearson_r_eff_p_value']}")


Keywords: ['pearson_r_eff_p_value', 'p_pval_eff', 'pvalue_eff', 'pval_eff']
climpred.metrics._pearson_r_eff_p_value(forecast, verif, dim=None, **metric_kwargs)[source]

Probability that forecast and verification data are linearly uncorrelated, accounting for autocorrelation.

Note

Weights are not included here due to the dependence on temporal autocorrelation.

Note

This metric can only be used for hindcast-type simulations.

The effective p value is computed by replacing the sample size N in the t-statistic with the effective sample size, N_{eff}. The same Pearson product-moment correlation coefficient r is used as when computing the standard p value.

t = r\sqrt{ \frac{N_{eff} - 2}{1 - r^{2}} },

where N_{eff} is computed via the autocorrelation in the forecast and verification data.

N_{eff} = N\left( \frac{1 -
           \rho_{f}\rho_{o}}{1 + \rho_{f}\rho_{o}} \right),

where \rho_{f} and \rho_{o} are the lag-1 autocorrelation coefficients for the forecast and verification data.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.pearson_r_eff_p_value

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 1.0
  • orientation: negative

See also

  • climpred.effective_sample_size

  • climpred.spearman_r_eff_p_value

Reference:
  • Bretherton, Christopher S., et al. “The effective number of spatial degrees of freedom of a time-varying field.” Journal of climate 12.7 (1999): 1990-2009.

Spearman’s Rank Correlation Coefficient

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [5]: print(f"\n\nKeywords: {metric_aliases['spearman_r']}")


Keywords: ['spearman_r', 'sacc', 'sr']
climpred.metrics._spearman_r(forecast, verif, dim=None, **metric_kwargs)[source]

Spearman’s rank correlation coefficient.

corr = \mathrm{pearsonr}(ranked(f), ranked(o))

This correlation coefficient is nonparametric and assesses how well the relationship between the forecast and verification data can be described using a monotonic function. It is computed by first ranking the forecasts and verification data, and then correlating those ranks using the pearson_r correlation.

This is also known as the anomaly correlation coefficient (ACC) when comparing anomalies, although the Pearson product-moment correlation coefficient (pearson_r) is typically used when computing the ACC.

Note

Use metric spearman_r_p_value or spearman_r_eff_p_value to get the corresponding p value.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.spearman_r

Details:
  • minimum: -1.0
  • maximum: 1.0
  • perfect: 1.0
  • orientation: positive

See also

  • xskillscore.spearman_r

  • xskillscore.spearman_r_p_value

  • climpred.spearman_r_p_value

  • climpred.spearman_r_eff_p_value
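
Example

A minimal call sketch (keyword choices are illustrative):

>>> hindcast.verify(metric='spearman_r', comparison='e2o', dim='init',
        alignment='same_verifs')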

Spearman’s Rank Correlation Coefficient p value

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [6]: print(f"\n\nKeywords: {metric_aliases['spearman_r_p_value']}")


Keywords: ['spearman_r_p_value', 's_pval', 'spvalue', 'spval']
climpred.metrics._spearman_r_p_value(forecast, verif, dim=None, **metric_kwargs)[source]

Probability that forecast and verification data are monotonically uncorrelated.

Two-tailed p value associated with the Spearman’s rank correlation coefficient (spearman_r), assuming that all samples are independent. Use spearman_r_eff_p_value to account for autocorrelation in the forecast and verification data.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.spearman_r_p_value

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 1.0
  • orientation: negative

See also

  • xskillscore.spearman_r

  • xskillscore.spearman_r_p_value

  • climpred.spearman_r

  • climpred.spearman_r_eff_p_value

Spearman’s Rank Correlation Effective p value

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [7]: print(f"\n\nKeywords: {metric_aliases['spearman_r_eff_p_value']}")


Keywords: ['spearman_r_eff_p_value', 's_pval_eff', 'spvalue_eff', 'spval_eff']
climpred.metrics._spearman_r_eff_p_value(forecast, verif, dim=None, **metric_kwargs)[source]

Probability that forecast and verification data are monotonically uncorrelated, accounting for autocorrelation.

Note

Weights are not included here due to the dependence on temporal autocorrelation.

Note

This metric can only be used for hindcast-type simulations.

The effective p value is computed by replacing the sample size N in the t-statistic with the effective sample size, N_{eff}. The same Spearman’s rank correlation coefficient r is used as when computing the standard p value.

t = r\sqrt{ \frac{N_{eff} - 2}{1 - r^{2}} },

where N_{eff} is computed via the autocorrelation in the forecast and verification data.

N_{eff} = N\left( \frac{1 -
           \rho_{f}\rho_{o}}{1 + \rho_{f}\rho_{o}} \right),

where \rho_{f} and \rho_{o} are the lag-1 autocorrelation coefficients for the forecast and verification data.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.spearman_r_eff_p_value

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 1.0
  • orientation: negative

See also

  • climpred.effective_sample_size

  • climpred.pearson_r_eff_p_value

Reference:
  • Bretherton, Christopher S., et al. “The effective number of spatial degrees of freedom of a time-varying field.” Journal of climate 12.7 (1999): 1990-2009.

Distance Metrics

This class of metrics simply measures the distance (or difference) between forecasted values and observed values.

Mean Squared Error (MSE)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [8]: print(f"\n\nKeywords: {metric_aliases['mse']}")


Keywords: ['mse']
climpred.metrics._mse(forecast, verif, dim=None, **metric_kwargs)[source]

Mean Square Error (MSE).

MSE = \overline{(f - o)^{2}}

The average of the squared difference between forecasts and verification data. This incorporates both the variance and bias of the estimator. Because the error is squared, it is more sensitive to large forecast errors than mae, and thus a more conservative metric. For example, a single error of 2°C counts the same as two 1°C errors when using mae. On the other hand, the 2°C error counts double for mse. See Jolliffe and Stephenson, 2011.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.mse

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative

See also

  • xskillscore.mse

Reference:
  • Ian T. Jolliffe and David B. Stephenson. Forecast Verification: A Practitioner’s Guide in Atmospheric Science. John Wiley & Sons, Ltd, Chichester, UK, December 2011. ISBN 978-1-119-96000-3 978-0-470-66071-3. URL: http://doi.wiley.com/10.1002/9781119960003.
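
Example

A minimal call sketch (keyword choices are illustrative):

>>> hindcast.verify(metric='mse', comparison='e2o', dim='init',
        alignment='same_verifs')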

Root Mean Square Error (RMSE)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [9]: print(f"\n\nKeywords: {metric_aliases['rmse']}")


Keywords: ['rmse']
climpred.metrics._rmse(forecast, verif, dim=None, **metric_kwargs)[source]

Root Mean Square Error (RMSE).

RMSE = \sqrt{\overline{(f - o)^{2}}}

The square root of the average of the squared differences between forecasts and verification data.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.rmse

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative

See also

  • xskillscore.rmse

Mean Absolute Error (MAE)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [10]: print(f"\n\nKeywords: {metric_aliases['mae']}")


Keywords: ['mae']
climpred.metrics._mae(forecast, verif, dim=None, **metric_kwargs)[source]

Mean Absolute Error (MAE).

MAE = \overline{|f - o|}

The average of the absolute differences between forecasts and verification data. A more robust measure of forecast accuracy than mse which is sensitive to large outlier forecast errors.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.mae

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative

See also

  • xskillscore.mae

Reference:
  • Ian T. Jolliffe and David B. Stephenson. Forecast Verification: A Practitioner’s Guide in Atmospheric Science. John Wiley & Sons, Ltd, Chichester, UK, December 2011. ISBN 978-1-119-96000-3 978-0-470-66071-3. URL: http://doi.wiley.com/10.1002/9781119960003.

Median Absolute Error

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [11]: print(f"\n\nKeywords: {metric_aliases['median_absolute_error']}")


Keywords: ['median_absolute_error']
climpred.metrics._median_absolute_error(forecast, verif, dim=None, **metric_kwargs)[source]

Median Absolute Error.

median(|f - o|)

The median of the absolute differences between forecasts and verification data. Applying the median function to absolute error makes it more robust to outliers.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.median_absolute_error

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative

See also

  • xskillscore.median_absolute_error

Normalized Distance Metrics

Distance metrics like mse can be normalized to 1. The normalization factor depends on the comparison type chosen. For example, the distance between an ensemble member and the ensemble mean is half the distance between an ensemble member and other ensemble members. See _get_norm_factor().
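
For example, a normalized metric for a PerfectModelEnsemble might be computed as in the following sketch (the perfect_model object and the m2e comparison are illustrative; the comparison keyword is what determines fac internally):

>>> perfect_model.verify(metric='nmse', comparison='m2e',
        dim=['init', 'member'])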

Normalized Mean Square Error (NMSE)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [12]: print(f"\n\nKeywords: {metric_aliases['nmse']}")


Keywords: ['nmse', 'nev']
climpred.metrics._nmse(forecast, verif, dim=None, **metric_kwargs)[source]

Normalized MSE (NMSE), also known as Normalized Ensemble Variance (NEV).

Mean Square Error (mse) normalized by the variance of the verification data.

NMSE = NEV = \frac{MSE}{\sigma^2_{o}\cdot fac}
     = \frac{\overline{(f - o)^{2}}}{\sigma^2_{o} \cdot fac},

where fac is 1 when using comparisons involving the ensemble mean (m2e, e2c, e2o) and 2 when using comparisons involving individual ensemble members (m2c, m2m, m2o). See _get_norm_factor().

Note

climpred uses a single-valued internal reference forecast for the NMSE, in the terminology of Murphy 1988. I.e., we use a single climatological variance of the verification data within the experimental window for normalizing MSE.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • comparison (str) – Name of the comparison needed for the normalization factor fac; see _get_norm_factor(). (Handled internally by the compute functions.)

  • metric_kwargs (dict) – see xskillscore.mse

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative
  • better than climatology: 0.0 - 1.0
  • worse than climatology: > 1.0

Reference:
  • Griffies, S. M., and K. Bryan. “A Predictability Study of Simulated North Atlantic Multidecadal Variability.” Climate Dynamics 13, no. 7–8 (August 1, 1997): 459–87. https://doi.org/10/ch4kc4.

  • Murphy, Allan H. “Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient.” Monthly Weather Review 116, no. 12 (December 1, 1988): 2417–24. https://doi.org/10/fc7mxd.

Normalized Mean Absolute Error (NMAE)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [13]: print(f"\n\nKeywords: {metric_aliases['nmae']}")


Keywords: ['nmae']
climpred.metrics._nmae(forecast, verif, dim=None, **metric_kwargs)[source]

Normalized Mean Absolute Error (NMAE).

Mean Absolute Error (mae) normalized by the standard deviation of the verification data.

NMAE = \frac{MAE}{\sigma_{o} \cdot fac}
     = \frac{\overline{|f - o|}}{\sigma_{o} \cdot fac},

where fac is 1 when using comparisons involving the ensemble mean (m2e, e2c, e2o) and 2 when using comparisons involving individual ensemble members (m2c, m2m, m2o). See _get_norm_factor().

Note

climpred uses a single-valued internal reference forecast for the NMAE, in the terminology of Murphy 1988. I.e., we use a single climatological standard deviation of the verification data within the experimental window for normalizing MAE.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • comparison (str) – Name of the comparison needed for the normalization factor fac; see _get_norm_factor(). (Handled internally by the compute functions.)

  • metric_kwargs (dict) – see xskillscore.mae

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative
  • better than climatology: 0.0 - 1.0
  • worse than climatology: > 1.0

Reference:
  • Griffies, S. M., and K. Bryan. “A Predictability Study of Simulated North Atlantic Multidecadal Variability.” Climate Dynamics 13, no. 7–8 (August 1, 1997): 459–87. https://doi.org/10/ch4kc4.

  • Murphy, Allan H. “Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient.” Monthly Weather Review 116, no. 12 (December 1, 1988): 2417–24. https://doi.org/10/fc7mxd.

Normalized Root Mean Square Error (NRMSE)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [14]: print(f"\n\nKeywords: {metric_aliases['nrmse']}")


Keywords: ['nrmse']
climpred.metrics._nrmse(forecast, verif, dim=None, **metric_kwargs)[source]

Normalized Root Mean Square Error (NRMSE).

Root Mean Square Error (rmse) normalized by the standard deviation of the verification data.

NRMSE = \frac{RMSE}{\sigma_{o}\cdot\sqrt{fac}}
      = \sqrt{\frac{MSE}{\sigma^{2}_{o}\cdot fac}}
      = \sqrt{ \frac{\overline{(f - o)^{2}}}{ \sigma^2_{o}\cdot fac}},

where fac is 1 when using comparisons involving the ensemble mean (m2e, e2c, e2o) and 2 when using comparisons involving individual ensemble members (m2c, m2m, m2o). See _get_norm_factor().

Note

climpred uses a single-valued internal reference forecast for the NRMSE, in the terminology of Murphy 1988. I.e., we use a single climatological variance of the verification data within the experimental window for normalizing RMSE.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • comparison (str) – Name of the comparison needed for the normalization factor fac; see _get_norm_factor(). (Handled internally by the compute functions.)

  • metric_kwargs (dict) – see xskillscore.rmse

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative
  • better than climatology: 0.0 - 1.0
  • worse than climatology: > 1.0

Reference:
  • Bushuk, Mitchell, Rym Msadek, Michael Winton, Gabriel Vecchi, Xiaosong Yang, Anthony Rosati, and Rich Gudgel. “Regional Arctic Sea–Ice Prediction: Potential versus Operational Seasonal Forecast Skill.” Climate Dynamics, June 9, 2018. https://doi.org/10/gd7hfq.

  • Hawkins, Ed, Steffen Tietsche, Jonathan J. Day, Nathanael Melia, Keith Haines, and Sarah Keeley. “Aspects of Designing and Evaluating Seasonal-to-Interannual Arctic Sea-Ice Prediction Systems.” Quarterly Journal of the Royal Meteorological Society 142, no. 695 (January 1, 2016): 672–83. https://doi.org/10/gfb3pn.

  • Murphy, Allan H. “Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient.” Monthly Weather Review 116, no. 12 (December 1, 1988): 2417–24. https://doi.org/10/fc7mxd.

Mean Square Error Skill Score (MSESS)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [15]: print(f"\n\nKeywords: {metric_aliases['msess']}")


Keywords: ['msess', 'ppp', 'msss']
climpred.metrics._msess(forecast, verif, dim=None, **metric_kwargs)[source]

Mean Squared Error Skill Score (MSESS).

MSESS = 1 - \frac{MSE}{\sigma^2_{ref} \cdot fac} =
       1 - \frac{\overline{(f - o)^{2}}}{\sigma^2_{ref} \cdot fac},

where fac is 1 when using comparisons involving the ensemble mean (m2e, e2c, e2o) and 2 when using comparisons involving individual ensemble members (m2c, m2m, m2o). See _get_norm_factor().

This skill score can be interpreted as a percentage improvement in accuracy; i.e., it can be multiplied by 100%.

Note

climpred uses a single-valued internal reference forecast for the MSSS, in the terminology of Murphy 1988. I.e., we use a single climatological variance of the verification data within the experimental window for normalizing MSE.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • comparison (str) – Name of the comparison needed for the normalization factor fac; see _get_norm_factor(). (Handled internally by the compute functions.)

  • metric_kwargs (dict) – see xskillscore.mse

Details:
  • minimum: -∞
  • maximum: 1.0
  • perfect: 1.0
  • orientation: positive
  • better than climatology: > 0.0
  • equal to climatology: 0.0
  • worse than climatology: < 0.0

Reference:
  • Griffies, S. M., and K. Bryan. “A Predictability Study of Simulated North Atlantic Multidecadal Variability.” Climate Dynamics 13, no. 7–8 (August 1, 1997): 459–87. https://doi.org/10/ch4kc4.

  • Murphy, Allan H. “Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient.” Monthly Weather Review 116, no. 12 (December 1, 1988): 2417–24. https://doi.org/10/fc7mxd.

  • Pohlmann, Holger, Michael Botzet, Mojib Latif, Andreas Roesch, Martin Wild, and Peter Tschuck. “Estimating the Decadal Predictability of a Coupled AOGCM.” Journal of Climate 17, no. 22 (November 1, 2004): 4463–72. https://doi.org/10/d2qf62.

  • Bushuk, Mitchell, Rym Msadek, Michael Winton, Gabriel Vecchi, Xiaosong Yang, Anthony Rosati, and Rich Gudgel. “Regional Arctic Sea–Ice Prediction: Potential versus Operational Seasonal Forecast Skill. Climate Dynamics, June 9, 2018. https://doi.org/10/gd7hfq.
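
Example

A minimal call sketch (keyword choices are illustrative):

>>> hindcast.verify(metric='msess', comparison='e2o', dim='init',
        alignment='same_verifs')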

Mean Absolute Percentage Error (MAPE)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [16]: print(f"\n\nKeywords: {metric_aliases['mape']}")


Keywords: ['mape']
climpred.metrics._mape(forecast, verif, dim=None, **metric_kwargs)[source]

Mean Absolute Percentage Error (MAPE).

Mean absolute error (mae) expressed as the fractional error relative to the verification data.

MAPE = \frac{1}{n} \sum \frac{|f-o|}{|o|}

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.mape

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative

See also

  • xskillscore.mape
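
Example

A minimal call sketch (keyword choices are illustrative):

>>> hindcast.verify(metric='mape', comparison='e2o', dim='init',
        alignment='same_verifs')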

Symmetric Mean Absolute Percentage Error (sMAPE)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [17]: print(f"\n\nKeywords: {metric_aliases['smape']}")


Keywords: ['smape']
climpred.metrics._smape(forecast, verif, dim=None, **metric_kwargs)[source]

Symmetric Mean Absolute Percentage Error (sMAPE).

Similar to the Mean Absolute Percentage Error (mape), but uses the sum of the absolute forecast and absolute observation in the denominator.

sMAPE = \frac{1}{n} \sum \frac{|f-o|}{|f|+|o|}

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.smape

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 0.0
  • orientation: negative

See also

  • xskillscore.smape

Unbiased Anomaly Correlation Coefficient (uACC)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [18]: print(f"\n\nKeywords: {metric_aliases['uacc']}")


Keywords: ['uacc']
climpred.metrics._uacc(forecast, verif, dim=None, **metric_kwargs)[source]

Bushuk’s unbiased Anomaly Correlation Coefficient (uACC).

This is typically used in perfect model studies. Because the perfect model Anomaly Correlation Coefficient (ACC) is strongly state dependent, a standard ACC (e.g. one computed using pearson_r) will be highly sensitive to the set of start dates chosen for the perfect model study. The Mean Square Error Skill Score (MSESS) can be related directly to the ACC as MSESS = ACC^2 (see Murphy 1988 and Bushuk et al. 2019), so the unbiased ACC can be derived as uACC = sqrt(MSESS).

uACC = \sqrt{MSESS}
     = \sqrt{1 - \frac{\overline{(f - o)^{2}}}{\sigma^2_{ref} \cdot fac}},

where fac is 1 when using comparisons involving the ensemble mean (m2e, e2c, e2o) and 2 when using comparisons involving individual ensemble members (m2c, m2m, m2o). See _get_norm_factor().

Note

Because of the square root involved, any negative MSESS values are automatically converted to NaNs.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • comparison (str) – Name of the comparison needed for the normalization factor fac; see _get_norm_factor(). (Handled internally by the compute functions.)

  • metric_kwargs (dict) – see xskillscore.mse

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 1.0
  • orientation: positive
  • better than climatology: > 0.0
  • equal to climatology: 0.0

Reference:
  • Bushuk, Mitchell, Rym Msadek, Michael Winton, Gabriel Vecchi, Xiaosong Yang, Anthony Rosati, and Rich Gudgel. “Regional Arctic Sea–Ice Prediction: Potential versus Operational Seasonal Forecast Skill.” Climate Dynamics, June 9, 2018. https://doi.org/10/gd7hfq.

  • Allan H. Murphy. Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient. Monthly Weather Review, 116(12):2417–2424, December 1988. https://doi.org/10/fc7mxd.
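
Example

A minimal call sketch for a perfect-model setup (keyword choices are illustrative):

>>> perfect_model.verify(metric='uacc', comparison='m2e',
        dim=['init', 'member'])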

Murphy Decomposition Metrics

Metrics derived in [Murphy1988], which decompose the MSESS into a correlation term, a conditional bias term, and an unconditional bias term. See https://www-miklip.dkrz.de/about/murcss/ for a walk-through of the decomposition.
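
For example, the individual terms of the decomposition can be computed with separate calls, as in this sketch (keyword choices are illustrative):

>>> hindcast.verify(metric='conditional_bias', comparison='e2o',
        dim='init', alignment='same_verifs')
>>> hindcast.verify(metric='unconditional_bias', comparison='e2o',
        dim='init', alignment='same_verifs')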

Standard Ratio

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [19]: print(f"\n\nKeywords: {metric_aliases['std_ratio']}")


Keywords: ['std_ratio']
climpred.metrics._std_ratio(forecast, verif, dim=None, **metric_kwargs)[source]

Ratio of standard deviations of the forecast over the verification data.

\text{std ratio} = \frac{\sigma_f}{\sigma_o},

where \sigma_{f} and \sigma_{o} are the standard deviations of the forecast and the verification data over the experimental period, respectively.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xarray.std

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 1.0
  • orientation: N/A

Conditional Bias

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [20]: print(f"\n\nKeywords: {metric_aliases['conditional_bias']}")


Keywords: ['conditional_bias', 'c_b', 'cond_bias']
climpred.metrics._conditional_bias(forecast, verif, dim=None, **metric_kwargs)[source]

Conditional bias between forecast and verification data.

\text{conditional bias} = r_{fo} - \frac{\sigma_f}{\sigma_o},

where \sigma_{f} and \sigma_{o} are the standard deviations of the forecast and verification data over the experimental period, respectively.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.pearson_r and xarray.std

Details:
  • minimum: -∞
  • maximum: 1.0
  • perfect: 0.0
  • orientation: negative

Unconditional Bias

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [21]: print(f"\n\nKeywords: {metric_aliases['unconditional_bias']}")


Keywords: ['unconditional_bias', 'u_b', 'bias']

climpred.metrics._unconditional_bias(forecast, verif, dim=None, **metric_kwargs)[source]

Unconditional bias.

Simple bias of the forecast minus the observations.

bias = f - o

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over

  • metric_kwargs (dict) – see xarray.mean

Details:
  • minimum: -∞
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative

Bias Slope

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [22]: print(f"\n\nKeywords: {metric_aliases['bias_slope']}")


Keywords: ['bias_slope']
climpred.metrics._bias_slope(forecast, verif, dim=None, **metric_kwargs)[source]

Bias slope between verification data and forecast standard deviations.

\text{bias slope} = \frac{s_{o}}{s_{f}} \cdot r_{fo},

where r_{fo} is the Pearson product-moment correlation between the forecast and the verification data and s_{o} and s_{f} are the standard deviations of the verification data and forecast over the experimental period, respectively.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.pearson_r and xarray.std

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 1.0
  • orientation: negative

Murphy’s Mean Square Error Skill Score

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [23]: print(f"\n\nKeywords: {metric_aliases['msess_murphy']}")


Keywords: ['msess_murphy', 'msss_murphy']
climpred.metrics._msess_murphy(forecast, verif, dim=None, **metric_kwargs)[source]

Murphy’s Mean Square Error Skill Score (MSESS).

MSESS_{Murphy} = r_{fo}^2 - [\text{conditional bias}]^2 - \left[\frac{\text{(unconditional) bias}}{\sigma_o}\right]^2,

where r_{fo}^{2} represents the Pearson product-moment correlation coefficient between the forecast and verification data and \sigma_{o} represents the standard deviation of the verification data over the experimental period. See conditional_bias and unconditional_bias for their respective formulations.

Parameters
  • forecast (xarray object) – Forecast.

  • verif (xarray object) – Verification data.

  • dim (str) – Dimension(s) to perform metric over.

  • metric_kwargs (dict) – see xskillscore.pearson_r, xarray.mean and xarray.std

Details:
  • minimum: -∞
  • maximum: 1.0
  • perfect: 1.0
  • orientation: positive

See also

  • climpred.pearson_r

  • climpred.conditional_bias

  • climpred.unconditional_bias

Probabilistic

Probabilistic metrics include the spread of the ensemble simulations in their calculations and assign a probability value between 0 and 1 to their forecasts [Jolliffe2011].

Continuous Ranked Probability Score (CRPS)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [24]: print(f"\n\nKeywords: {metric_aliases['crps']}")


Keywords: ['crps']
climpred.metrics._crps(forecast, verif, dim=None, **metric_kwargs)[source]

Continuous Ranked Probability Score (CRPS).

The CRPS can also be considered as the probabilistic Mean Absolute Error (mae). It compares the empirical distribution of an ensemble forecast to a scalar observation. Smaller scores indicate better skill.

CRPS = \int_{-\infty}^{\infty} (F(f) - H(f - o))^{2} df,

where F(f) is the cumulative distribution function (CDF) of the forecast (since the verification data are not assigned a probability), and H() is the Heaviside step function, which takes the value 1 if its argument is positive or zero (i.e., the forecast is greater than or equal to the verification data) and 0 otherwise (i.e., the forecast is less than the verification data).

Note

The CRPS is expressed in the same unit as the observed variable. It generalizes the Mean Absolute Error (MAE), and reduces to the MAE if the forecast is deterministic.

Parameters
  • forecast (xr.object) – Forecast with member dim.

  • verif (xr.object) – Verification data without member dim.

  • dim (list of str) – Dimension to apply metric over. Expects at least member. Other dimensions are passed to xskillscore and averaged.

  • metric_kwargs (dict) – optional, see xskillscore.crps_ensemble

Details:
  • minimum: 0.0
  • maximum: ∞
  • perfect: 0.0
  • orientation: negative

See also

  • properscoring.crps_ensemble

  • xskillscore.crps_ensemble

Example

>>> hindcast.verify(metric='crps', comparison='m2o', dim='member')

Continuous Ranked Probability Skill Score (CRPSS)

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [25]: print(f"\n\nKeywords: {metric_aliases['crpss']}")


Keywords: ['crpss']
climpred.metrics._crpss(forecast, verif, dim=None, **metric_kwargs)[source]

Continuous Ranked Probability Skill Score.

This can be used to assess whether the ensemble spread is a useful measure for the forecast uncertainty by comparing the CRPS of the ensemble forecast to that of a reference forecast with the desired spread.

CRPSS = 1 - \frac{CRPS_{initialized}}{CRPS_{clim}}

Note

When assuming a Gaussian distribution of forecasts, use default gaussian=True. If not gaussian, you may specify the distribution type, xmin/xmax/tolerance for integration (see xskillscore.crps_quadrature).

Parameters
  • forecast (xr.object) – Forecast with member dim.

  • verif (xr.object) – Verification data without member dim.

  • dim (list of str) – Dimension to apply metric over. Expects at least member. Other dimensions are passed to xskillscore and averaged.

  • metric_kwargs (dict) – optional. gaussian (bool, optional): If True, assume a Gaussian distribution for the baseline skill. Defaults to True. See xskillscore.crps_ensemble, xskillscore.crps_gaussian and xskillscore.crps_quadrature.

Details:
  • minimum: -∞
  • maximum: 1.0
  • perfect: 1.0
  • orientation: positive
  • better than climatology: > 0.0
  • worse than climatology: < 0.0

Reference:
  • Matheson, James E., and Robert L. Winkler. “Scoring Rules for Continuous Probability Distributions.” Management Science 22, no. 10 (June 1, 1976): 1087–96. https://doi.org/10/cwwt4g.

  • Gneiting, Tilmann, and Adrian E Raftery. “Strictly Proper Scoring Rules, Prediction, and Estimation.” Journal of the American Statistical Association 102, no. 477 (March 1, 2007): 359–78. https://doi.org/10/c6758w.

Example

>>> hindcast.verify(metric='crpss', comparison='m2o',
        alignment='same_verifs', dim='member')
>>> perfect_model.verify(metric='crpss', comparison='m2m', dim='member',
        gaussian=False, cdf_or_dist=scipy.stats.norm, xminimum=-10,
        xmaximum=10, tol=1e-6)

See also

  • properscoring.crps_ensemble

  • xskillscore.crps_ensemble

Continuous Ranked Probability Skill Score Ensemble Spread

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [26]: print(f"\n\nKeywords: {metric_aliases['crpss_es']}")


Keywords: ['crpss_es']
climpred.metrics._crpss_es(forecast, verif, dim=None, **metric_kwargs)[source]

Continuous Ranked Probability Skill Score Ensemble Spread.

If the ensemble variance is smaller than the observed MSE, the ensemble is said to be under-dispersive (or overconfident). An ensemble with variance larger than the observed MSE is over-dispersive (underconfident).

CRPSS = 1 - \frac{CRPS(\sigma^2_f)}{CRPS(\sigma^2_o)}

Parameters
  • forecast (xr.object) – Forecast with member dim.

  • verif (xr.object) – Verification data without member dim.

  • dim (list of str) – Dimension to apply metric over. Expects at least member. Other dimensions are passed to xskillscore and averaged.

  • metric_kwargs (dict) – see xskillscore.crps_ensemble and xskillscore.mse

Details:
  • minimum: -∞
  • maximum: 0.0
  • perfect: 0.0
  • orientation: positive
  • under-dispersive: > 0.0
  • over-dispersive: < 0.0

Reference:
  • Kadow, Christopher, Sebastian Illing, Oliver Kunst, Henning W. Rust, Holger Pohlmann, Wolfgang A. Müller, and Ulrich Cubasch. “Evaluation of Forecasts by Accuracy and Spread in the MiKlip Decadal Climate Prediction System.” Meteorologische Zeitschrift, December 21, 2016, 631–43. https://doi.org/10/f9jrhw.

Example

>>> hindcast.verify(metric='crpss_es', comparison='m2o',
        alignment='same_verifs', dim='member')

Brier Score

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [27]: print(f"\n\nKeywords: {metric_aliases['brier_score']}")


Keywords: ['brier_score', 'brier', 'bs']
climpred.metrics._brier_score(forecast, verif, dim=None, **metric_kwargs)[source]

Brier Score for binary events.

The Mean Square Error (mse) of probabilistic two-category forecasts where the verification data are either 0 (no occurrence) or 1 (occurrence) and forecast probability may be arbitrarily distributed between occurrence and non-occurrence. The Brier Score equals zero for perfect (single-valued) forecasts and one for forecasts that are always incorrect.

BS(f, o) = (f_1 - o)^2,

where f_1 is the forecast probability of o=1.

Note

The Brier Score requires that the observation is binary, i.e., can be described as one (a “hit”) or zero (a “miss”). So either provide a function logical with binary outcomes in metric_kwargs, or create binary verification data and probability forecasts via hindcast.map(logical).mean('member'). This Brier Score is not the original formula given in Brier’s 1950 paper.

Parameters
  • forecast (xr.object) – Raw forecasts with member dimension if logical provided in metric_kwargs. Probability forecasts in [0,1] if logical is not provided.

  • verif (xr.object) – Verification data without member dim. Raw verification if logical provided, else binary verification.

  • dim (list or str) – Dimensions to aggregate. Requires member if logical provided in metric_kwargs to create probability forecasts. If logical not provided in metric_kwargs, should not include member.

  • metric_kwargs (dict) – optional. logical (callable): Function with bool result to be applied to verification data and forecasts and then mean('member') to get forecasts and verification data in interval [0,1]. See xskillscore.brier_score.

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 0.0
  • orientation: negative

See also

  • properscoring.brier_score

  • xskillscore.brier_score

Example

Define a boolean/logical function for binary scoring:

>>> def pos(x): return x > 0  # checking binary outcomes

Option 1. Pass with keyword logical: (Works also for PerfectModelEnsemble)

>>> hindcast.verify(metric='brier_score', comparison='m2o',
        dim='member', alignment='same_verifs', logical=pos)

Option 2. Pre-process to generate a binary forecast and verification product:

>>> hindcast.map(pos).verify(metric='brier_score',
        comparison='m2o', dim='member', alignment='same_verifs')

Option 3. Pre-process to generate a probability forecast and binary verification product. Because member is no longer present in the hindcast, use comparison='e2o' and dim=[]:

>>> hindcast.map(pos).mean('member').verify(metric='brier_score',
        comparison='e2o', dim=[], alignment='same_verifs')

Threshold Brier Score

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [28]: print(f"\n\nKeywords: {metric_aliases['threshold_brier_score']}")


Keywords: ['threshold_brier_score', 'tbs']
climpred.metrics._threshold_brier_score(forecast, verif, dim=None, **metric_kwargs)[source]

Brier score of an ensemble for exceeding given thresholds.

CRPS = \int_f BS(F(f), H(f - o)) df,

where F(o) = \int_{f \leq o} p(f) df is the cumulative distribution function (CDF) of the forecast distribution F, o is a point estimate of the true observation (observational error is neglected), BS denotes the Brier score and H(x) denotes the Heaviside step function, which we define here as equal to 1 for x \geq 0 and 0 otherwise.

Parameters
  • forecast (xr.object) – Forecast with member dim.

  • verif (xr.object) – Verification data without member dim.

  • dim (list of str) – Dimension to apply metric over. Expects at least member. Other dimensions are passed to xskillscore and averaged.

  • threshold (int, float, xr.object) – Threshold to check exceedance, see properscoring.threshold_brier_score.

  • metric_kwargs (dict) – optional, see xskillscore.threshold_brier_score

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 0.0
  • orientation: negative

See also

  • properscoring.threshold_brier_score

  • xskillscore.threshold_brier_score

Example

>>> hindcast.verify(metric='threshold_brier_score', comparison='m2o',
        dim='member', threshold=.5)
>>> hindcast.verify(metric='threshold_brier_score', comparison='m2o',
        dim='member', threshold=[.3, .7])

Ranked Probability Score

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [29]: print(f"\n\nKeywords: {metric_aliases['rps']}")


Keywords: ['rps']
climpred.metrics._rps(forecast, verif, dim=None, **metric_kwargs)[source]

Ranked Probability Score.

RPS(p, o) = \frac{1}{M} \sum_{m=1}^{M} \left[ \left(\sum_{k=1}^{m} p_k\right) - \left(\sum_{k=1}^{m} o_k\right) \right]^{2},

where p_k is the forecast probability of category k and o_k is the observed (binary) occurrence of category k, accumulated over the M categories.

Parameters
  • forecast (xr.object) – Raw forecasts with member dimension.

  • verif (xr.object) – Verification data without member dim.

  • dim (list or str) – Dimensions to aggregate. Requires to contain member.

  • category_edges (array_like) – Category bin edges used to compute the CDFs. Bins include the left most edge, but not the right. Passed via metric_kwargs.

Details:
  • minimum: 0.0
  • maximum: 1.0
  • perfect: 0.0
  • orientation: negative

See also

  • xskillscore.rps

Example

>>> category_edges = np.array([-.5, 0., .5, 1.])
>>> hindcast.verify(metric='rps', comparison='m2o', dim='member',
        alignment='same_verifs', category_edges=category_edges)
>>> perfect_model.verify(metric='rps', comparison='m2c',
        dim='member', category_edges=category_edges)

Reliability

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [30]: print(f"\n\nKeywords: {metric_aliases['reliability']}")


Keywords: ['reliability']
climpred.metrics._reliability(forecast, verif, dim=None, **metric_kwargs)[source]

Returns the data required to construct the reliability diagram for an event; that is, the relative frequencies of occurrence of an event within a range of forecast probability bins.

Parameters
  • forecast (xr.object) – Raw forecasts with member dimension if logical provided in metric_kwargs. Probability forecasts in [0,1] if logical is not provided.

  • verif (xr.object) – Verification data without member dim. Raw verification if logical provided, else binary verification.

  • dim (list or str) – Dimensions to aggregate. Requires member if logical provided in metric_kwargs to create probability forecasts. If logical not provided in metric_kwargs, should not include member.

  • logical (callable, optional) – Function with bool result to be applied to verification data and forecasts and then mean('member') to get forecasts and verification data in interval [0,1]. Passed via metric_kwargs.

  • probability_bin_edges (array_like, optional) – Probability bin edges used to compute the reliability. Bins include the left most edge, but not the right. Passed via metric_kwargs. Defaults to 6 equally spaced edges between 0 and 1+1e-8.

Returns
  • reliability (xr.object) – The relative frequency of occurrence for each probability bin

Details:
  • perfect: flat distribution

See also

  • xskillscore.reliability

Example

Define a boolean/logical function for binary scoring:

>>> def pos(x): return x > 0  # checking binary outcomes

Option 1. Pass with keyword logical: (Works also for PerfectModelEnsemble)

>>> hindcast.verify(metric='reliability', comparison='m2o',
        dim=['member','init'], alignment='same_verifs', logical=pos)

Option 2. Pre-process to generate a binary forecast and verification product:

>>> hindcast.map(pos).verify(metric='reliability',
        comparison='m2o', dim=['member','init'], alignment='same_verifs')

Option 3. Pre-process to generate a probability forecast and binary verification product. Because member is no longer present in the hindcast, use comparison='e2o' and dim='init':

>>> hindcast.map(pos).mean('member').verify(metric='reliability',
        comparison='e2o', dim='init', alignment='same_verifs')

Discrimination

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [31]: print(f"\n\nKeywords: {metric_aliases['discrimination']}")


Keywords: ['discrimination']
climpred.metrics._discrimination(forecast, verif, dim=None, **metric_kwargs)[source]

Returns the data required to construct the discrimination diagram for an event; that is, the histograms of forecast probabilities when the observed event occurred and when it did not occur.

Parameters
  • forecast (xr.object) – Raw forecasts with member dimension if logical provided in metric_kwargs. Probability forecasts in [0,1] if logical is not provided.

  • verif (xr.object) – Verification data without member dim. Raw verification if logical provided, else binary verification.

  • dim (list or str) – Dimensions to aggregate. Requires member if logical provided in metric_kwargs to create probability forecasts. If logical not provided in metric_kwargs, should not include member. At least one dimension other than member is required.

  • logical (callable, optional) – Function with bool result to be applied to verification data and forecasts and then mean('member') to get forecasts and verification data in interval [0,1]. Passed via metric_kwargs.

  • probability_bin_edges (array_like, optional) – Probability bin edges used to compute the histograms. Bins include the left most edge, but not the right. Passed via metric_kwargs. Defaults to 6 equally spaced edges between 0 and 1+1e-8.

Returns
  • discrimination (xr.object) – Discrimination with added dimension “event” containing the histograms of forecast probabilities when the event was observed and when it was not observed

Details:
  • perfect: distinct distributions

See also

  • xskillscore.discrimination

Example

Define a boolean/logical function for binary scoring:

>>> def pos(x): return x > 0  # checking binary outcomes

Option 1. Pass with keyword logical: (Works also for PerfectModelEnsemble)

>>> hindcast.verify(metric='discrimination', comparison='m2o',
        dim=['member', 'init'], alignment='same_verifs', logical=pos)

Option 2. Pre-process to generate a binary forecast and verification product:

>>> hindcast.map(pos).verify(metric='discrimination',
        comparison='m2o', dim=['member','init'], alignment='same_verifs')

Option 3. Pre-process to generate a probability forecast and binary verification product. Because member is no longer present in the hindcast, use comparison='e2o' and dim='init':

>>> hindcast.map(pos).mean('member').verify(metric='discrimination',
        comparison='e2o', dim='init', alignment='same_verifs')

Rank Histogram

# Enter any of the below keywords in ``metric=...`` for the compute functions.
In [32]: print(f"\n\nKeywords: {metric_aliases['rank_histogram']}")


Keywords: ['rank_histogram']
climpred.metrics._rank_histogram(forecast, verif, dim=None, **metric_kwargs)[source]

Rank histogram or Talagrand diagram.

Parameters
  • forecast (xr.object) – Raw forecasts with member dimension.

  • verif (xr.object) – Verification data without member dim.

  • dim (list or str) – Dimensions to aggregate. Requires to contain member and at least one additional dimension.

Details:
  • perfect: flat distribution

See also

  • xskillscore.rank_histogram

Example

>>> hindcast.verify(metric='rank_histogram', comparison='m2o',
        dim=['member','init'], alignment='same_verifs')
>>> perfect_model.verify(metric='rank_histogram', comparison='m2c',
        dim=['member','init'])

Contingency-based metrics

A number of metrics can be derived from a contingency table. To use this in climpred, run .verify(metric='contingency', score=...) where score can be chosen from xskillscore.

climpred.metrics._contingency(forecast, verif, score='table', dim=None, **metric_kwargs)[source]

Contingency table.

Parameters
  • forecast (xr.object) – Raw forecasts.

  • verif (xr.object) – Verification data.

  • dim (list or str) – Dimensions to aggregate.

  • score (str) – Score derived from the contingency table; an attribute of xskillscore.Contingency. Use score='table' to return the contingency table itself, or any other contingency score, e.g. score='hit_rate'.

  • observation_category_edges (array_like) – Category bin edges used to compute the observations CDFs. Bins include the left most edge, but not the right. Passed via metric_kwargs.

  • forecast_category_edges (array_like) – Category bin edges used to compute the forecast CDFs. Bins include the left most edge, but not the right. Passed via metric_kwargs

See also

  • xskillscore.Contingency

Example

>>> category_edges = np.array([-0.5, 0., .5, 1.])
>>> hindcast.verify(metric='contingency', score='table', comparison='m2o',
        dim=[], alignment='same_verifs',
        observation_category_edges=category_edges,
        forecast_category_edges=category_edges)
>>> perfect_model.verify(metric='contingency', score='hit_rate',
        comparison='m2c', dim=['member','init'],
        observation_category_edges=category_edges,
        forecast_category_edges=category_edges)

User-defined metrics

You can also construct your own metrics via the climpred.metrics.Metric class.

Metric(name, function, positive, …[, …])

Master class for all metrics.

First, write your own metric function, similar to the existing ones with required arguments forecast, observations, dim=None, and **metric_kwargs:

import numpy as np

from climpred.metrics import Metric


def _my_msle(forecast, observations, dim=None, **metric_kwargs):
    """Mean squared logarithmic error (MSLE).

    https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/loss-functions/mean-squared-logarithmic-error.
    """
    # squared difference of the log-transformed values, averaged over dim
    return ((np.log(forecast + 1) - np.log(observations + 1)) ** 2).mean(dim)

Then initialize this metric function with climpred.metrics.Metric:

_my_msle = Metric(
    name='my_msle',
    function=_my_msle,
    probabilistic=False,
    positive=False,
    unit_power=0,
    )

Finally, compute skill based on your own metric:

skill = hindcast.verify(metric=_my_msle, comparison='e2o', alignment='same_verif', dim='init')

Once you come up with a useful metric for your problem, consider contributing it to climpred so that all users can benefit from your metric; see contributing.

References

[Jolliffe2011] Ian T. Jolliffe and David B. Stephenson. Forecast Verification: A Practitioner’s Guide in Atmospheric Science. John Wiley & Sons, Ltd, Chichester, UK, December 2011. ISBN 978-1-119-96000-3, 978-0-470-66071-3. URL: http://doi.wiley.com/10.1002/9781119960003.

[Murphy1988] Allan H. Murphy. Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient. Monthly Weather Review, 116(12):2417–2424, December 1988. https://doi.org/10/fc7mxd.