climpred.metrics._discrimination

climpred.metrics._discrimination(forecast: xarray.Dataset, verif: xarray.Dataset, dim: Optional[Union[str, List[str]]] = None, **metric_kwargs: Any) → xarray.Dataset  [source]

Discrimination.

Return the data required to construct the discrimination diagram for an event: the histograms of forecast likelihood when observations indicate that the event has occurred and when it has not occurred.
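
Conceptually, the probability forecasts are split by whether the event was observed, and each subset is histogrammed over a shared set of probability bins; well-discriminating forecasts produce clearly distinct histograms. A minimal standalone sketch with numpy on synthetic data (illustrative only, not climpred's implementation):

    import numpy as np

    rng = np.random.default_rng(42)
    # Synthetic probability forecasts in [0, 1] and binary observations of the event.
    forecast_probability = rng.uniform(0, 1, size=1000)
    event_observed = rng.uniform(0, 1, size=1000) < forecast_probability

    # 5 equally spaced bins; the right edge is padded slightly so that a
    # forecast of exactly 1.0 falls into the last bin.
    bin_edges = np.linspace(0, 1 + 1e-8, 6)

    # Histograms of forecast probabilities when the event occurred and when it
    # did not, each normalized to relative frequencies.
    hist_event, _ = np.histogram(forecast_probability[event_observed], bins=bin_edges)
    hist_no_event, _ = np.histogram(forecast_probability[~event_observed], bins=bin_edges)
    hist_event = hist_event / hist_event.sum()
    hist_no_event = hist_no_event / hist_no_event.sum()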

Parameters
  • forecast – Raw forecasts with ``member`` dimension if ``logical`` is provided in metric_kwargs; probability forecasts in [0, 1] if ``logical`` is not provided.

  • verif – Verification data without ``member`` dimension. Raw verification if ``logical`` is provided, else binary verification.

  • dim – Dimensions to aggregate. Requires ``member`` if ``logical`` is provided in metric_kwargs, in order to create probability forecasts. If ``logical`` is not provided in metric_kwargs, ``dim`` should not include ``member``. At least one dimension other than ``member`` is required.

  • logical – Function returning a boolean result, applied to verification data and forecasts and followed by ``mean("member")``, to obtain forecasts and verification data in the interval [0, 1]. Passed via metric_kwargs.

  • probability_bin_edges (array_like, optional) – Probability bin edges used to compute the histograms. Bins include the leftmost edge, but not the right. Passed via metric_kwargs. Defaults to 6 equally spaced edges between 0 and 1+1e-8; custom edges can be passed as sketched after Option 1 below.

Returns

Discrimination with added dimension ``event`` containing the histograms of forecast probabilities when the event was observed and when it was not observed.

Notes

perfect
    distinct distributions

Example

Define a boolean/logical function for binary scoring:

>>> def pos(x):
...     return x > 0  # checking binary outcomes
...

Option 1. Pass with keyword ``logical`` (especially designed for PerfectModelEnsemble, where binary verification can only be created after comparison):

>>> HindcastEnsemble.verify(
...     metric="discrimination",
...     comparison="m2o",
...     dim=["member", "init"],
...     alignment="same_verifs",
...     logical=pos,
... )
<xarray.Dataset>
Dimensions:               (lead: 10, forecast_probability: 5, event: 2)
Coordinates:
  * lead                  (lead) int32 1 2 3 4 5 6 7 8 9 10
  * forecast_probability  (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
  * event                 (event) bool True False
    skill                 <U11 'initialized'
Data variables:
    SST                   (lead, event, forecast_probability) float64 0.07407...
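
The histogram resolution can be changed by passing ``probability_bin_edges`` through metric_kwargs alongside ``logical``. A sketch of a hypothetical variation of Option 1, reusing ``pos`` and the ``HindcastEnsemble`` object from above (output omitted):

    import numpy as np

    # 10 probability bins instead of the default 5.
    HindcastEnsemble.verify(
        metric="discrimination",
        comparison="m2o",
        dim=["member", "init"],
        alignment="same_verifs",
        logical=pos,
        probability_bin_edges=np.linspace(0, 1 + 1e-8, 11),
    )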

Option 2. Pre-process to generate a binary forecast and verification product:

>>> HindcastEnsemble.map(pos).verify(
...     metric="discrimination",
...     comparison="m2o",
...     dim=["member", "init"],
...     alignment="same_verifs",
... )
<xarray.Dataset>
Dimensions:               (lead: 10, forecast_probability: 5, event: 2)
Coordinates:
  * lead                  (lead) int32 1 2 3 4 5 6 7 8 9 10
  * forecast_probability  (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
  * event                 (event) bool True False
    skill                 <U11 'initialized'
Data variables:
    SST                   (lead, event, forecast_probability) float64 0.07407...

Option 3. Pre-process to generate a probability forecast and binary verification product. Because ``member`` is no longer present after taking the ensemble mean, use ``comparison="e2o"`` and ``dim="init"``:

>>> HindcastEnsemble.map(pos).mean("member").verify(
...     metric="discrimination",
...     comparison="e2o",
...     dim="init",
...     alignment="same_verifs",
... )
<xarray.Dataset>
Dimensions:               (lead: 10, forecast_probability: 5, event: 2)
Coordinates:
  * lead                  (lead) int32 1 2 3 4 5 6 7 8 9 10
  * forecast_probability  (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
  * event                 (event) bool True False
    skill                 <U11 'initialized'
Data variables:
    SST                   (lead, event, forecast_probability) float64 0.07407...
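
The returned dataset can be turned into a discrimination diagram by plotting each ``event`` slice against ``forecast_probability``. A hypothetical matplotlib sketch using the Option 1 result at lead 1 (assumes the ``SST`` variable shown above):

    import matplotlib.pyplot as plt

    disc = HindcastEnsemble.verify(
        metric="discrimination",
        comparison="m2o",
        dim=["member", "init"],
        alignment="same_verifs",
        logical=pos,
    )

    # One curve per outcome: relative frequency of forecast probabilities when
    # the event was observed (event=True) and when it was not (event=False).
    for event in [True, False]:
        disc["SST"].sel(lead=1, event=event).plot(
            x="forecast_probability", label=f"event={event}"
        )
    plt.legend()
    plt.show()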