Summary of some features of the ENSEMBLES new portal
From User Guide of the ENSEMBLES Downscaling Portal (version 2)
J.M. Gutiérrez, D. San Martín, A.S. Cofiño, S. Herrera, R. Manzanas
Technical Notes, Santander Meteorology Group (CSIC-UC), SMG:2.2011
1) Daily time scale
2) Station data, or gridded data
3) Predetermined one-to-one correspondence between predictors/predictands and downscaling methods.
4) Not possible to see which variables can be downscaled; tmax and tmin are available, and probably precipitation (total daily precip).
5) Extensive Validation section, which includes results from cross-validation on 25% of the available data; validation metrics are calculated on a daily scale and on a 10-day aggregated scale.
6) Ensemble approach – 4 models, 3 SRES scenarios, 2 downscaling techniques (analogs and linear regression), downscaling performed separately for each experiment and by decade, output given as separate time series in a .csv format; no ensemble consensus metrics.
7) No Uncertainty assessment.
8) Help information for each step – not able to check how exhaustive this information is, or how close it comes to our ideas about translational information, since the help button does not work in the demo.
9) Users need to have some knowledge of downscaling and of the large-scale drivers of local climate, or need to work closely with climate scientists.
Users: "This user guide is intended for end-users with some basic knowledge on statistical downscaling and focus on the steps to be followed to undertake a particular downscaling experiment using the downscaling portal.
We want to remark that this portal should not be used as a black-box input-output tool since, otherwise, the obtained regional projections could be misleading or even wrong. Therefore, some background knowledge about the meteorological conditions in the area of interest and the main large scale drivers influencing the climate is needed to appropriately use the downscaling tool and to obtain meaningful results. Moreover, the results obtained from the ENSEMBLES Downscaling Portal should not be directly used in impact applications without the necessary knowledge about the assumptions and limitations of this methodology.
Thus, we strongly advise end-users to work in collaboration with downscaling groups, or at least have some support from them, in order to define the experiments and to appropriately analyze and use the results."
Demo: As an illustrative example, the portal includes a “demo” experiment, Iberia demo, which focuses on maximum temperature in five locations/cities for the 2091-2100 decade.
GCM data sets – control (20c3m, 1961-2000) and future (B1, A1B and A2, for 2001-2100). Daily data from BCM2, CNCM3, MPEH5 (ENSEMBLES Stream1) and HADGEM2 (ENSEMBLES Stream2). These model data have been validated on a daily basis for the different upper-level fields included as predictors in the portal.
Reanalysis data sets – ERA-40 (ECMWF) and the NCEP/NCAR Reanalysis.
Used as predictor data sets during the development (calibration) stage.
Observed data sets (station data) – Global Stations Network (GSN) and Global Summary Of the Day (GSOD); the user will be able to include new observational datasets into the portal (available with the new version of the portal).
"The skill of the downscaling methods depends on the variable, season and region of interest, with the latter variation dominating. Thus, for each particular application and case study, an ensemble of statistical downscaling methods needs to be tested and validated to achieve the maximum skill and a proper representation of uncertainties. Validation is a key issue in the ENSEMBLES downscaling portal and it is automatically performed when a downscaling method is defined."
Structure of the portal: The portal is organized into different windows (tabs): tabs (1-3) correspond to the calibration/validation of a particular downscaling method, whereas tab (4) corresponds to the actual downscaling process, applying the calibrated method to different GCMs and scenarios:
(1) Predictor data sets – In order to manage a homogeneous basic set of parameters for the different GCM outputs (reanalysis and climate change projections), a dataset of commonly-used predictor variables on a daily basis has been defined:
Variable                  Levels (hPa)         Time (UTC)
2m Temperature (2T)       surface              00
V velocity (V)            850, 700, 500, 300   00
U velocity (U)            850, 700, 500, 300   00
Specific humidity (Q)     850, 700, 500, 300   00
Relative Vorticity (VO)   850, 700, 500, 300   00
A common 2.5deg x 2.5deg grid is used; data are interpolated from the native model grids by standard bilinear interpolation.
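As a sketch of this interpolation step (the grid axes, field values and target point below are illustrative, not the portal's actual data):

```python
import numpy as np

def bilinear_interp(field, lats, lons, lat_t, lon_t):
    """Bilinearly interpolate `field` (on regular 1-D `lats`/`lons` axes)
    to a single target point strictly inside the grid."""
    # Indices of the grid cell enclosing the target point
    i = np.searchsorted(lats, lat_t) - 1
    j = np.searchsorted(lons, lon_t) - 1
    # Fractional position of the target inside that cell
    ty = (lat_t - lats[i]) / (lats[i + 1] - lats[i])
    tx = (lon_t - lons[j]) / (lons[j + 1] - lons[j])
    # Weighted average of the four surrounding grid-box values
    return (field[i, j] * (1 - ty) * (1 - tx)
            + field[i, j + 1] * (1 - ty) * tx
            + field[i + 1, j] * ty * (1 - tx)
            + field[i + 1, j + 1] * ty * tx)
```

For a field that varies linearly with latitude and longitude, bilinear interpolation reproduces the exact value, which makes the sketch easy to check.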
Since the GCMs to be used later for downscaling may lack some of these predictors, a panel (7) indicates which GCMs (among those available to the user) are compatible with the selected set of predictors, i.e. which models have the scenario data required for downscaling within the current predictor dataset (i.e. within the current experiment).
(2) Predictand – Each of the predictands may have one or several associated downscaling methods.
(3) Downscaling Method – Analog methods (default) or linear regression methods; Perfect Prognosis approach (the systematic model error is not taken into account).
In the future, MOS-like approaches will be included as well.
- In the default configuration, only the first five PCs of the predictors are considered for downscaling. The user can modify this number, including an arbitrary number of PCs, or can instead select an arbitrary number of neighbor grid boxes; in the latter case the raw model output values are used as predictors and the spatial coherence of the method is lost, since different local points will have different downscaling models with different predictors (the same variables, but over different grid points).
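The analog approach over a PC representation of the predictors can be sketched as follows (a minimal illustration; the portal's actual distance measure, analog selection and PC truncation may differ):

```python
import numpy as np

def analog_downscale(train_pcs, train_obs, target_pcs, n_analogs=1):
    """Analog downscaling sketch: for each target day, find the n_analogs
    closest training days in PC space (Euclidean distance) and return the
    mean of their observed station values."""
    out = np.empty((len(target_pcs), train_obs.shape[1]))
    for t, pcs in enumerate(target_pcs):
        d = np.linalg.norm(train_pcs - pcs, axis=1)  # distance to every training day
        nearest = np.argsort(d)[:n_analogs]          # indices of the closest analogs
        out[t] = train_obs[nearest].mean(axis=0)     # average their observations
    return out
```

Because predictions are drawn from observed days, the analog method preserves spatial coherence across stations, which is exactly what is lost in the per-point raw-predictor configuration described above.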
Every new downscaling method is automatically validated by the portal.
Validation – Training subset (75%) and testing subset (25%) of the data. Statistics: mean, standard deviation, Pearson correlation, MAE and RMSE, plus reliability metrics (bias, coefficient of determination, and the PDF and Kolmogorov-Smirnov scores), on a daily and 10-day aggregate basis. For precipitation: the ratio of observed to predicted no-precipitation frequencies and the False Alarm Rate (FAR), the probability that rain is forecast given that it was not observed.
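A minimal sketch of some of these daily scores on the 25% testing subset (the 0.1 mm wet-day threshold and the function layout are assumptions, not the portal's implementation):

```python
import numpy as np

def validation_scores(obs, pred, wet_threshold=0.1):
    """Daily validation scores in the spirit of the portal: Pearson
    correlation, MAE, RMSE and, for precipitation, the False Alarm Rate,
    here computed as P(rain forecast | no rain observed) as stated in the
    notes. wet_threshold (mm) separating dry/wet days is an assumption."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    corr = np.corrcoef(obs, pred)[0, 1]
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    dry = obs < wet_threshold                       # days with no observed rain
    far = (pred[dry] >= wet_threshold).mean() if dry.any() else np.nan
    return {"corr": corr, "mae": mae, "rmse": rmse, "far": far}
```

The same functions can be applied to 10-day aggregated series to reproduce the second validation scale mentioned above.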
By clicking on any of the score labels, a pop-up window appears. From it, the user can choose which scores (columns) to visualize and rank the stations in ascending/descending order; moreover, the spatial distribution of the score can be displayed on the map on the right-hand side.
(4) Downscale – downscaling jobs are run separately for each subsequent decadal period.
"The account’s restrictions will determine the maximum number of cells that can be selected/submitted simultaneously. For instance, users with a basic profile (i.e. those not involved in the supporting projects or institutions) can only run two jobs simultaneously, which include the creation of predictor, predictand (with the basic downscaling method), or downscaling method, as well as the downscaling jobs.
We strongly advise the users to first downscale, download and analyze a single decade before performing more exhaustive downscaling tasks, as we did in the Iberia demo experiment (the 2091-2100 decade for the ECHAM5 A1B scenario)."
Output – downscaled time series - available in .csv format.
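Once downloaded, such a daily .csv series can be aggregated to the 10-day scale used in the validation; a sketch with pandas (the series values, column name and layout here are hypothetical, not the portal's actual output format):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for a downscaled daily series as it might be read
# from the portal's .csv output: one column per station, indexed by date.
dates = pd.date_range("2091-01-01", periods=30, freq="D")
ts = pd.DataFrame({"station_1": np.linspace(10.0, 12.0, 30)}, index=dates)

# Aggregate the daily values to the 10-day scale used in the validation
agg = ts.resample("10D").mean()
```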
Jobs performed with the portal (choice of predictors, validation of the statistical relationship, downscaling) are run in parallel through a queue of computational resources, which allows several requests to be handled and monitored simultaneously.
1) The information about the account details, including the restrictions holding on the resources (number of simultaneous jobs, etc.), can be consulted at any time in the “My Account” tab in the upper-right corner of the window. It also gives information about the databases available for the current user.
2) Each particular experiment (shown in the “Experiment manager” panel of the “My Home” window) is based on a single predictor dataset defined from reanalysis data over a particular region with a particular resolution. Therefore, a one-to-one correspondence is established in the portal between an “experiment” and the particular predictor dataset used. Note that this restriction could be problematic for friendly use of the portal, since running a downscaling method for a given predictand with different predictors would imply defining a new experiment. However, the flexibility to freely combine predictors, predictands and downscaling techniques leads to data-compatibility problems which cannot be solved in a user-friendly form. This restriction may change in future versions of the portal, if the development team finds a solution to overcome these problems.
3) According to the restrictions of the user’s account, there is a maximum number of stations/points that can be selected for a particular predictand. For instance, users with a basic profile (i.e. those not involved in the supporting projects or institutions) can only select five stations.
Additional notes from Joe Barsugli:
What is missing (IMHO):
It sounds like the method for a user to upload their "proprietary" dataset requires them to have the Portal developers add the dataset into their system. This could be a barrier to use (but also could require users to engage with the downscaling experts....)
Note on the use of this system: Currently, the system seems designed primarily for Antonio's team to do the downscaling for a "client". He mentioned the use with EDF (Électricité de France) in a seasonal forecasting application. In that case, the users at EDF were already knowledgeable about seasonal forecasts, while Antonio's group provided downscaling expertise. All in all, it sounds like a "consultant-client" relationship, and sometimes very much like a lot of RISA work on downscaling.