climpred v2.1.3 (2021-03-23)
map() now does not fail silently when applying a function to all underlying datasets fails; instead, UserWarnings are raised. Furthermore, PredictionEnsemble.map(func, *args, **kwargs) applies the function only to Datasets with matching dims if dim="dim0_or_dim1" is passed as **kwargs. (GH#417, GH#437, GH#552) Aaron Spring.
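The behaviour can be sketched outside of climpred with plain Python (all names here, such as apply_to_datasets, are hypothetical stand-ins, with dicts playing the role of xr.Datasets): failures raise a UserWarning instead of passing silently, and a dim keyword restricts application to datasets carrying that dimension.

```python
import warnings

def apply_to_datasets(datasets, func, **kwargs):
    """Hypothetical stand-in for PredictionEnsemble.map-style semantics:
    apply func to each dataset, warn on failure, and honour a dim filter."""
    dim = kwargs.pop("dim", None)
    result = {}
    for name, ds in datasets.items():
        if dim is not None and dim not in ds["dims"]:
            result[name] = ds  # dim not present: leave this dataset untouched
            continue
        try:
            result[name] = {"dims": ds["dims"], "data": func(ds["data"])}
        except Exception as err:
            # do not fail silently: surface the problem as a UserWarning
            warnings.warn(f"map failed on {name!r}: {err}", UserWarning)
            result[name] = ds
    return result

datasets = {
    "initialized": {"dims": ("init", "lead"), "data": [1, 2, 3]},
    "observations": {"dims": ("time",), "data": [4, 5, 6]},
}

# only 'initialized' carries the 'lead' dim, so only it is doubled
out = apply_to_datasets(datasets, lambda data: [x * 2 for x in data], dim="lead")

# a failing function warns on every dataset instead of silently doing nothing
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    failed = apply_to_datasets(datasets, lambda data: data.undefined)
```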
climpred v2.1.2 (2021-01-22)
This release is the fixed version for our Journal of Open Source Software (JOSS) article about climpred; see the review.
climpred v2.1.1 (2020-10-13)
This version introduces a lot of breaking changes. We are trying to overhaul
climpred to have an intuitive API that also forces users to think about methodology
choices when running functions. The main breaking changes we introduced are for
verify(). Now, instead of assuming
defaults for most keywords, we require the user to define
alignment (for hindcast systems). We also require users to designate
the number of
iterations for bootstrapping.
Standardize the names of the output coordinates:
initialized showcases the metric result after comparing the initialized ensemble to the verification data;
uninitialized when comparing the uninitialized (historical) ensemble to the verification data;
persistence is the evaluation of the persistence forecast. (GH#460, GH#478, GH#476, GH#480) Aaron Spring.
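As a rough illustration of what these three coordinates measure (plain NumPy, not climpred code; the toy series and forecasts are invented), each reference is scored against the same verification data, here with mean absolute error:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0, 6, 40)) + 0.1 * rng.standard_normal(40)

lead = 1
verif = obs[lead:]  # verification targets at this lead

# initialized: a skilful forecast tracking the verification data
initialized = verif + 0.05 * rng.standard_normal(verif.size)
# uninitialized: a historical/climatology-like run, uncorrelated with verif
uninitialized = 0.1 * rng.standard_normal(verif.size)
# persistence: the last observed value carried forward by `lead` steps
persistence = obs[:-lead]

def mae(forecast):
    return float(np.abs(forecast - verif).mean())

skill = {
    "initialized": mae(initialized),
    "uninitialized": mae(uninitialized),
    "persistence": mae(persistence),
}
```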
This release is accompanied by a bunch of new features. Math operations can now be used with PredictionEnsemble objects, and their variables can be sub-selected. Users can now quick-plot time series forecasts with these objects. Bootstrapping is available for hindcast systems, and multiple dimensions can be passed to metrics to do things like pattern correlation. New metrics have been implemented based on Contingency tables. We now include an early version of bias removal for hindcast systems.
verify() now allows all dimensions from the initialized ensemble to be passed to dim. This allows e.g. spatial dimensions to be used for pattern correlation. Make sure to use skipna=True when using spatial dimensions and the output has NaNs (in the case of land, for instance). (GH#282, GH#407) Aaron Spring.
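The idea of passing spatial dimensions with skipna=True can be mimicked with NumPy alone (a sketch, not climpred's implementation; pattern_corr is a hypothetical helper): flatten the spatial dims, drop NaN points such as land, and correlate the remaining pattern.

```python
import numpy as np

def pattern_corr(fcst, verif):
    """Pearson correlation over flattened spatial dims, skipping NaNs
    (the role skipna=True plays when land points are NaN)."""
    f, v = fcst.ravel(), verif.ravel()
    keep = ~(np.isnan(f) | np.isnan(v))
    f = f[keep] - f[keep].mean()
    v = v[keep] - v[keep].mean()
    return float((f * v).sum() / np.sqrt((f**2).sum() * (v**2).sum()))

field = np.arange(20, dtype=float).reshape(4, 5)  # toy (lat, lon) field
verif = field.copy()
verif[0, 0] = np.nan  # a land point in the verification data
r = pattern_corr(field, verif)
```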
Allow binary forecasts when calling verify(), rather than needing to supply binary results beforehand. In other words, hindcast.verify(metric='brier_score', comparison='m2o', dim='member', logical=logical) is now the same as hindcast.map(logical).verify(metric='brier_score', comparison='m2o', dim='member'). (GH#431) Aaron Spring.
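A NumPy sketch of the equivalence (toy data; the logical function and its 0.5 threshold are invented for illustration): the logical function turns continuous values into event occurrences, member occurrences become a forecast probability, and the Brier score compares that probability with the observed event.

```python
import numpy as np

def logical(x):
    # hypothetical event definition: value exceeds 0.5
    return x > 0.5

rng = np.random.default_rng(1)
obs = rng.random(100)             # verification values
members = rng.random((10, 100))   # 10 ensemble members

# forecast probability = fraction of members in which the event occurs
prob = logical(members).mean(axis=0)

# Brier score: mean squared difference between probability and outcome
brier = float(((prob - logical(obs)) ** 2).mean())

# a perfect probabilistic forecast scores exactly 0
perfect = float(((logical(obs).astype(float) - logical(obs)) ** 2).mean())
```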
Check calendar types when using add_control() to ensure that the verification data calendar matches that of the initialized ensemble. (GH#300, GH#452, GH#422, GH#462) Riley X. Brady and Aaron Spring.
Implemented bias removal: remove_bias(how='mean') removes the mean bias of initialized hindcasts with respect to observations. See example. (GH#389, GH#443, GH#459) Aaron Spring and Riley X. Brady.
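how='mean' bias removal can be sketched with NumPy (synthetic data, not the climpred implementation): estimate the mean forecast-minus-observation bias per lead over all initializations, then subtract it.

```python
import numpy as np

rng = np.random.default_rng(2)
n_init, n_lead = 30, 5
truth = rng.standard_normal(n_init)             # observations per init
bias = np.linspace(0.5, 1.5, n_lead)            # lead-dependent model bias
hindcast = (truth[:, None] + bias[None, :]
            + 0.1 * rng.standard_normal((n_init, n_lead)))

# mean bias per lead, estimated against the observations
mean_bias = (hindcast - truth[:, None]).mean(axis=0)
corrected = hindcast - mean_bias[None, :]

# the corrected hindcast has zero mean bias at every lead by construction
residual = (corrected - truth[:, None]).mean(axis=0)
```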
compute_persistence is no longer used for PerfectModelEnsemble, in favor of the reference keyword instead. (GH#436, GH#468, GH#472) Aaron Spring and Riley X. Brady.
Add page on bias removal. Aaron Spring.
climpred v2.1.0 (2020-06-08)
Add the reference=... keyword. Current options are 'persistence' for a persistence forecast of the observations and 'uninitialized' for an uninitialized/historical reference, such as an uninitialized/forced run. (GH#341) Riley X. Brady.
We now only enforce a union of the initialization dates with observations if a persistence reference is requested for the HindcastEnsemble. This is to ensure that the same set of initializations is used by the observations to construct a persistence forecast. (GH#341) Riley X. Brady.
compute_perfect_model() now accepts cftime initializations. cftime is now implemented into the bootstrap uninitialized functions for the perfect model configuration. (GH#332) Aaron Spring.
'same_verifs': slice to a common/consistent verification time frame prior to computing the metric. This philosophy follows the thought that each lead should be based on the same set of verification dates. (GH#331) Riley X. Brady.
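The 'same_verifs' idea reduces to a set intersection, sketched here with plain Python and invented yearly dates: each lead can only verify at init + lead, and only dates reachable by every lead are kept.

```python
# toy annual example: initializations 2000-2005 verified at leads 1-3
inits = range(2000, 2006)
leads = (1, 2, 3)

# verification dates available to each lead (init year + lead years)
verifs_per_lead = {lead: {init + lead for init in inits} for lead in leads}

# 'same_verifs': every lead is scored over the same verification dates
common_verifs = set.intersection(*verifs_per_lead.values())
```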
The major change for this release is a dramatic speedup in bootstrapping functions, led by
Aaron Spring. We focused on scalability with
dask and found many places we could compute
skill simultaneously over all bootstrapped ensemble members rather than at each iteration.
Properly implemented handling for lazy results when inputs are chunked.
Users are warned when chunking is potentially unnecessary and/or inefficient.
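The kind of change described can be sketched with NumPy on toy data: instead of resampling and scoring once per bootstrap iteration, draw all iteration indices up front and reduce over an extra iteration axis in one vectorized call. (Illustrative only; climpred's internals differ.)

```python
import numpy as np

rng = np.random.default_rng(3)
members = rng.standard_normal((10, 50))  # (member, time) toy ensemble
iterations = 200
idx = rng.integers(0, 10, size=(iterations, 10))  # resampled member indices

# loop version: one resample and one reduction per iteration
loop_means = np.empty(iterations)
for b in range(iterations):
    loop_means[b] = members[idx[b]].mean()

# vectorized version: one fancy-indexing call builds an
# (iteration, member, time) array, reduced in a single step
vec_means = members[idx].mean(axis=(1, 2))
```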
Require xskillscore v0.0.15 and use its functions for effective sample size-based metrics. (GH#353) Riley X. Brady.
climpred v2.0.0 (2020-01-22)
Add support for seasons for lead time resolution. climpred now requires a lead attribute "units" to decipher what resolution the predictions are at. (GH#294) Kathy Pegion and Riley X. Brady.
>>> hind = climpred.tutorial.load_dataset('CESM-DP-SST')
>>> hind.lead.attrs['units'] = 'years'
Add .get_observations() methods. These are the same as .get_reference(), which will be deprecated eventually. The name change clears up confusion, since "reference" is the appropriate name for a reference forecast, e.g. persistence. (GH#310) Riley X. Brady.
Add a .verify() function, which duplicates the .compute_metric() function. We feel that .verify() is clearer and easier to write, and follows the terminology of the field. (GH#310) Riley X. Brady.
climpred v1.2.1 (2020-01-07)
Add ['p_pval_eff', 'pvalue_eff', 'pval_eff'] as aliases for the effective p value of the Pearson correlation and ['s_pval_eff', 'spvalue_eff', 'spval_eff'] for the effective p value of the Spearman correlation.
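These metrics adjust significance for autocorrelation via an effective sample size; here is a NumPy sketch of the usual adjustment (the standard Bretherton-style reduction, not climpred's exact code; all names are invented):

```python
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())

def effective_sample_size(a, b):
    """N_eff = N * (1 - r1a*r1b) / (1 + r1a*r1b): autocorrelated series
    carry fewer independent samples, widening the p value accordingly."""
    r = lag1_autocorr(a) * lag1_autocorr(b)
    return len(a) * (1 - r) / (1 + r)

rng = np.random.default_rng(4)
smooth = np.ones(20) / 20
white_a = rng.standard_normal(200)
white_b = rng.standard_normal(200)
red_a = np.convolve(rng.standard_normal(219), smooth, mode="valid")
red_b = np.convolve(rng.standard_normal(219), smooth, mode="valid")

n_eff_white = effective_sample_size(white_a, white_b)  # close to N = 200
n_eff_red = effective_sample_size(red_a, red_b)        # far fewer
```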
climpred v1.2.0 (2019-12-17)
>>> hind = climpred.tutorial.load_dataset('CESM-DP-SST')
>>> ref = climpred.tutorial.load_dataset('ERSST')
>>> hindcast = climpred.HindcastEnsemble(hind)
>>> hindcast = hindcast.add_reference(ref, 'ERSST')
>>> print(hindcast)
<climpred.HindcastEnsemble>
Initialized Ensemble:
    SST      (init, lead, member) float64 ...
ERSST:
    SST      (time) float32 ...
Uninitialized:
    None
>>> print(hindcast.get_initialized())
<xarray.Dataset>
Dimensions:  (init: 64, lead: 10, member: 10)
Coordinates:
  * lead     (lead) int32 1 2 3 4 5 6 7 8 9 10
  * member   (member) int32 1 2 3 4 5 6 7 8 9 10
  * init     (init) float32 1954.0 1955.0 1956.0 1957.0 ... 2015.0 2016.0 2017.0
Data variables:
    SST      (init, lead, member) float64 ...
>>> print(hindcast.get_reference('ERSST'))
<xarray.Dataset>
Dimensions:  (time: 61)
Coordinates:
  * time     (time) int64 1955 1956 1957 1958 1959 ... 2011 2012 2013 2014 2015
Data variables:
    SST      (time) float32 ...
climpred v1.1.0 (2019-09-23)
climpred v1.0.1 (2019-07-04)
climpred v1.0.0 (2019-07-03)
climpred v1.0.0 represents the first stable release of the package. It includes
HindcastEnsemble and PerfectModelEnsemble objects to perform analysis with.
It offers a suite of deterministic and probabilistic metrics that are optimized to be
run on single time series or grids of data (e.g., lat, lon, and depth). Currently,
climpred only supports annual forecasts.
Documentation built extensively in multiple PRs.
climpred v0.3 (2019-04-27)
climpred v0.3 really represents the entire development phase leading up to the
version 1 release. This was done in collaboration between Riley X. Brady,
Aaron Spring, and Andrew Huang. Future releases will have fewer additions.
init: initialization dates for the prediction ensemble
lead: retrospective forecasts from prediction ensemble; returned dimension for prediction calculations
time: time dimension for control runs, references, etc.
member: ensemble member dimension.
climpred v0.2 (2019-01-11)
Name changed to
climpred, developed enough for basic decadal prediction tasks on a
perfect-model ensemble and reference-based ensemble.
climpred v0.1 (2018-12-20)
Collaboration between Riley Brady and Aaron Spring begins.