climpred.metrics._contingency
- climpred.metrics._contingency(forecast, verif, score='table', dim=None, **metric_kwargs)
Contingency table.
- Parameters:
  - forecast – Raw forecasts.
  - verif – Verification data.
  - dim – Dimensions to aggregate.
  - score (str) – Score derived from the contingency table. Attribute from Contingency. Use score="table" to return the contingency table, or any other contingency score, e.g. score="hit_rate" (see the sketch after this parameter list).
  - observation_category_edges (array_like) – Category bin edges used to compute the observation CDFs. Bins include the leftmost edge but not the right. Passed via metric_kwargs.
  - forecast_category_edges (array_like) – Category bin edges used to compute the forecast CDFs. Bins include the leftmost edge but not the right. Passed via metric_kwargs.
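The score names map onto the Contingency class from xskillscore, which this metric wraps. Below is a minimal, illustrative sketch of how the category edges and scores behave when that class is used directly; the toy obs/fct arrays, the random seed, and the dimension name "time" are assumptions for the example, and it presumes xskillscore's public Contingency constructor with its table() and hit_rate() methods.

import numpy as np
import xarray as xr
import xskillscore as xs

# Toy paired data: 100 hypothetical observation/forecast values (assumed for illustration).
rng = np.random.default_rng(0)
obs = xr.DataArray(rng.normal(10.0, 0.3, 100), dims="time")
fct = xr.DataArray(rng.normal(10.0, 0.3, 100), dims="time")

# Three edges define two categories: [9.5, 10.0) and [10.0, 10.5);
# the left edge of each bin is included, the right edge is not.
category_edges = np.array([9.5, 10.0, 10.5])

contingency = xs.Contingency(
    obs, fct, category_edges, category_edges, dim="time"
)

contingency.table()     # analogous to metric="contingency", score="table"
contingency.hit_rate()  # analogous to score="hit_rate" (dichotomous, 2 x 2 table only)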
See also
Contingency
Example
>>> category_edges = np.array([-0.5, 0.0, 0.5, 1.0])
>>> HindcastEnsemble.verify(
...     metric="contingency",
...     score="table",
...     comparison="m2o",
...     dim=["member", "init"],
...     alignment="same_verifs",
...     observation_category_edges=category_edges,
...     forecast_category_edges=category_edges,
... ).isel(lead=[0, 1]).SST
<xarray.DataArray 'SST' (lead: 2, observations_category: 3, forecasts_category: 3)>
array([[[221,  29,   0],
        [ 53, 217,   0],
        [  0,   0,   0]],

       [[234,  16,   0],
        [ 75, 194,   1],
        [  0,   0,   0]]])
Coordinates:
    observations_category_bounds  (observations_category) <U11 '[-0.5, 0.0)' ...
    forecasts_category_bounds     (forecasts_category) <U11 '[-0.5, 0.0)' ...
  * observations_category         (observations_category) int64 1 2 3
  * forecasts_category            (forecasts_category) int64 1 2 3
  * lead                          (lead) int32 1 2
    skill                         <U11 'initialized'
Attributes:
    units:    None
>>> # contingency-based dichotomous accuracy score
>>> category_edges = np.array([9.5, 10.0, 10.5])
>>> PerfectModelEnsemble.verify(
...     metric="contingency",
...     score="hit_rate",
...     comparison="m2c",
...     dim=["member", "init"],
...     observation_category_edges=category_edges,
...     forecast_category_edges=category_edges,
... )
<xarray.Dataset>
Dimensions:  (lead: 20)
Coordinates:
  * lead     (lead) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Data variables:
    tos      (lead) float64 1.0 1.0 1.0 1.0 0.9091 ... 1.0 1.0 1.0 nan 1.0
Attributes:
    prediction_skill_software:     climpred https://climpred.readthedocs.io/
    skill_calculated_by_function:  PerfectModelEnsemble.verify()
    number_of_initializations:     12
    number_of_members:             10
    metric:                        contingency
    comparison:                    m2c
    dim:                           ['member', 'init']
    reference:                     []
    score:                         hit_rate
    observation_category_edges:    [ 9.5 10.  10.5]
    forecast_category_edges:       [ 9.5 10.  10.5]
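As a sanity check on what score="hit_rate" reports: for a dichotomous (2 x 2) table the hit rate is hits / (hits + misses), i.e. the fraction of observed events that were also forecast. A minimal sketch of that arithmetic on a hypothetical 2 x 2 table (the counts below are made up, not taken from the output above):

import numpy as np

# Hypothetical 2 x 2 contingency table:
# rows = observed category, columns = forecast category,
# index 0 = "no event", index 1 = "event".
table = np.array([[50, 10],
                  [ 5, 35]])

hits = table[1, 1]    # event observed and forecast
misses = table[1, 0]  # event observed but not forecast
hit_rate = hits / (hits + misses)  # 35 / 40 = 0.875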