climpred.classes.PerfectModelEnsemble.bootstrap

PerfectModelEnsemble.bootstrap(metric: Optional[Union[str, climpred.metrics.Metric]] = None, comparison: Optional[Union[str, climpred.comparisons.Comparison]] = None, dim: Optional[Union[str, List[str]]] = None, reference: Optional[Union[List[str], str]] = None, groupby: Optional[Union[str, xarray.DataArray]] = None, iterations: Optional[int] = None, sig: int = 95, resample_dim: str = 'member', pers_sig: Optional[int] = None, **metric_kwargs: Optional[Any]) → xarray.Dataset

Bootstrap with replacement according to Goddard et al. [2013].

Parameters
  • metric – Metric to apply for verification, see metrics

  • comparison – How to compare the forecast against itself, see comparisons

  • dim – Dimension(s) over which to apply metric. dim is passed on to xskillscore.{metric} and includes xskillscore’s member_dim. dim should contain member when comparison is probabilistic but should not contain member when comparison=e2c. Defaults to None meaning that all dimensions other than lead are reduced.

  • reference – Type of reference forecast to verify against. One or more of ["uninitialized", "persistence", "climatology"]. Defaults to None, meaning no reference. If None or [], returns no p value. For persistence, choose between set_options(PerfectModel_persistence_from_initialized_lead_0=False) (default), which uses compute_persistence(), and set_options(PerfectModel_persistence_from_initialized_lead_0=True), which uses compute_persistence_from_first_lead(); see the sketch after this parameter list.

  • iterations – Number of resampling iterations for bootstrapping with replacement. Recommended >= 500.

  • resample_dim – dimension to resample from. Defaults to “member”.

    • “member”: select a different set of members from forecast

    • “init”: select a different set of initializations from forecast

  • sig – Significance level in percent for deciding whether uninitialized and persistence beat initialized skill.

  • pers_sig – If not None, the separate significance level for persistence. Defaults to None, i.e. the same significance level as sig.

  • groupby – group init before passing initialized to bootstrap.

  • **metric_kwargs – arguments passed to metric.
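
For the persistence option mentioned above, a minimal sketch of switching the reference computation via climpred.set_options; the option name is taken from the attributes of the example output below, the context-manager usage is assumed to mirror xarray.set_options, and the call arguments simply follow the example below:

>>> import climpred
>>> # context-manager usage assumed, mirroring xarray.set_options
>>> with climpred.set_options(
...     PerfectModel_persistence_from_initialized_lead_0=True
... ):
...     skill = PerfectModelEnsemble.bootstrap(
...         metric="crps",
...         comparison="m2m",
...         dim=["init", "member"],
...         iterations=50,
...         reference=["persistence"],
...     )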

Returns

xarray.Dataset with dimensions results (holding verify skill, p, low_ci and high_ci) and skill (holding initialized, persistence and/or uninitialized):

  • results=”verify skill”, skill=”initialized”:

    mean initialized skill

  • results=”high_ci”, skill=”initialized”:

    high confidence interval boundary for initialized skill

  • results=”p”, skill=”uninitialized”:

    p value of the hypothesis that the difference of skill between the initialized and uninitialized simulations is smaller than or equal to zero based on bootstrapping with replacement.

  • results=”p”, skill=”persistence”:

    p value of the hypothesis that the difference of skill between the initialized and persistence simulations is smaller than or equal to zero based on bootstrapping with replacement.

Reference:

Goddard et al. [2013]

Example

Continuous Ranked Probability Score ("crps") comparing every member to every other member ("m2m"), reducing dimensions member and init, with 50 iterations of resampling the member dimension with replacement. Also calculate reference skill for the "persistence", "climatology" and "uninitialized" forecasts and test whether initialized skill beats reference skill. Returns the verify skill, the probability that the reference forecast performs better than the initialized forecast, and the lower and upper bounds of the resampled confidence interval.

>>> PerfectModelEnsemble.bootstrap(
...     metric="crps",
...     comparison="m2m",
...     dim=["init", "member"],
...     iterations=50,
...     resample_dim="member",
...     reference=["persistence", "climatology", "uninitialized"],
... )
<xarray.Dataset>
Dimensions:  (skill: 4, results: 4, lead: 20)
Coordinates:
  * lead     (lead) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
  * results  (results) <U12 'verify skill' 'p' 'low_ci' 'high_ci'
  * skill    (skill) <U13 'initialized' 'persistence' ... 'uninitialized'
Data variables:
    tos      (skill, results, lead) float64 0.0621 0.07352 ... 0.1607 0.1439
Attributes: (12/13)
    prediction_skill_software:                         climpred https://climp...
    skill_calculated_by_function:                      PerfectModelEnsemble.b...
    number_of_initializations:                         12
    number_of_members:                                 10
    metric:                                            crps
    comparison:                                        m2m
    ...                                                ...
    reference:                                         ['persistence', 'clima...
    PerfectModel_persistence_from_initialized_lead_0:  False
    resample_dim:                                      member
    sig:                                               95
    iterations:                                        50
    confidence_interval_levels:                        0.975-0.025
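
The returned xarray.Dataset can be queried with plain xarray selection. A hedged sketch of extracting the p value and confidence bounds for the uninitialized reference; the variable name tos and the 0.05 threshold (1 - sig / 100) are taken from the example above and are illustrative only:

>>> bootstrapped = PerfectModelEnsemble.bootstrap(
...     metric="crps",
...     comparison="m2m",
...     dim=["init", "member"],
...     iterations=50,
...     reference=["uninitialized"],
... )
>>> # p value of the hypothesis that initialized does not beat uninitialized
>>> p = bootstrapped.sel(skill="uninitialized", results="p")
>>> # flag leads where p falls below 1 - sig / 100 = 0.05 (illustrative threshold)
>>> beats_uninitialized = p.tos < 0.05
>>> # lower and upper confidence bounds of the initialized skill
>>> ci = bootstrapped.sel(skill="initialized", results=["low_ci", "high_ci"])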