Creating a global dataset of standardised ocean wave parameters for climate change investigations

Construction of the COWCLIPv2 dataset to enable accurate comparison between wave climate projections

Concerns about climate-driven changes in wind-waves impacting coastal stability have led many international modelling groups to create climate-model-forced ocean wave datasets. Differences in model configuration between groups (choice of dynamical or statistical wave model, input data, model physics, model grid resolution, and output wave parameters) have made direct comparison of wave model data between modelling groups difficult, and ascertaining sources of uncertainty even more so.

COWCLIP (the Coordinated Ocean Wave CLImate Project) aims to improve wave climate information by standardising model outputs, and producing a consistent set of model statistics to make the data directly comparable while reducing the data volume from many terabytes to gigabytes. The project has been active for over 10 years, but it has taken time to build the capacity to standardise model inputs and outputs to the extent required to produce a truly interoperable dataset.

In 2019 we published the paper “Robustness and uncertainties in global multivariate wind-wave climate projections”, which drew on data created through COWCLIP, to produce the first thorough statistical analysis of wave climate projections drawing on modelling efforts from around the world. The description of this dataset has just been published in Nature Scientific Data.

Modelling groups processed their raw wave model outputs to generate a common set of monthly, seasonal and annual statistics for three key wave variables (height, period and direction), using post-processing code supplied to all modelling groups. Groups uploaded their contributing data to a central server, and provided metadata describing their model configuration, forcing data (winds from CMIP5-era global climate models, sea ice, and bathymetry) and references.
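To illustrate the kind of post-processing involved, the sketch below (Python with NumPy; the variable names, grid sizes and synthetic values are invented for the example, not the actual COWCLIP code) computes annual and seasonal means of significant wave height, and uses a vector average for the circular wave-direction variable, which cannot be averaged arithmetically:

```python
import numpy as np

def circular_mean_deg(directions_deg, axis=0):
    """Mean wave direction via vector averaging (degrees, 0-360)."""
    rad = np.deg2rad(directions_deg)
    mean = np.arctan2(np.sin(rad).mean(axis=axis), np.cos(rad).mean(axis=axis))
    return np.rad2deg(mean) % 360.0

# Synthetic example: 12 monthly fields of significant wave height (m)
# on a tiny lat/lon grid -- stand-ins for real wave model output.
rng = np.random.default_rng(0)
hs_monthly = rng.uniform(1.0, 5.0, size=(12, 4, 8))   # (month, lat, lon)

annual_mean = hs_monthly.mean(axis=0)                  # annual statistic
djf_mean = hs_monthly[[11, 0, 1]].mean(axis=0)         # seasonal (DJF) statistic

# Wave direction needs the circular mean, not a plain average
dir_monthly = rng.uniform(0.0, 360.0, size=(12, 4, 8))
annual_dir = circular_mean_deg(dir_monthly, axis=0)
```

The vector average matters near north: the circular mean of 350° and 10° is 0°, where a naive arithmetic mean would give 180°.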

Figure: Schematic workflow from the contributing modelling groups, through the standardisation code, to the final data structure ready for community use.

Contributed data were processed to a consistent structure for each modelling group and climate projection scenario, and regridded onto a common 1°x1° spatial grid. Data are constrained to “historical” (1986-2005) and mid- and high-end emissions future (2080-2099) time slices.
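As a simple illustration of the regridding step, a field on a finer regular grid can be brought onto a common 1°x1° grid by block averaging. The sketch below (Python/NumPy, with an invented 0.5° input grid; this is one of several possible regridding approaches and may differ from the method actually used) also shows selecting the two 20-year time slices:

```python
import numpy as np

# Invented 0.5-degree global field (360 lat x 720 lon), values for demo only
fine = np.arange(360 * 720, dtype=float).reshape(360, 720)

# 2x2 block average onto the common 1x1-degree grid (180 lat x 360 lon)
coarse = fine.reshape(180, 2, 360, 2).mean(axis=(1, 3))

# Restrict a run spanning many years to the two 20-year time slices
years = np.arange(1950, 2101)
historical = years[(years >= 1986) & (years <= 2005)]
future = years[(years >= 2080) & (years <= 2099)]
```

Block averaging assumes the fine grid nests exactly inside the coarse one; real wave model grids often do not align this neatly, which is why regridding onto a shared grid was part of the central processing rather than left to each group.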

Finally, we aimed to produce an open, well-structured dataset for community use. We chose netCDF, an open, self-describing format in common use across the climate community, following a directory and file naming convention similar to that used for climate model data. Producing these files was not trivial: robust code was needed to ensure the data were written accurately and that every file carried complete metadata.
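The sketch below shows what such a convention can look like, loosely in the spirit of CMIP5's directory layout. The names, fields and layout here are illustrative assumptions, not the published COWCLIPv2 convention; the data descriptor paper gives the actual specification:

```python
from pathlib import Path

def output_path(root, institute, model, experiment, variable, stat, period):
    """Build a self-describing, CMIP5-DRS-like path (hypothetical layout)."""
    fname = f"{variable}_{stat}_{model}_{experiment}_{period}.nc"
    return Path(root) / institute / model / experiment / variable / fname

# Hypothetical example entry for one statistic from one contribution
p = output_path("COWCLIP2", "CSIRO", "ACCESS1-0", "rcp85",
                "hs", "annual-mean", "2080-2099")

# Global attributes the netCDF writer would stamp into every file
# (illustrative fields only)
global_attrs = {
    "title": "Standardised wave climate statistics",
    "institution": "CSIRO",
    "experiment": "rcp85",
    "frequency": "annual",
}
```

Encoding the model, experiment, variable and period in both the directory tree and the filename means each file remains identifiable even when separated from the rest of the archive.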

Publishing the data involved some unexpected hurdles. Our chosen data repository, the Australian Ocean Data Network (AODN), did not have a history of publishing modelled data, so COWCLIPv2 represented a new data type for this portal. Additionally, the licensing associated with CMIP5 models was inconsistent, and some modelling groups used data that were “non-commercial” in their terms of use. Our objective was an open dataset, so we had to confirm with the global climate model publication community that we could publish this data openly.

We thank all CMIP5 model producers and our colleagues in the contributing wave modelling groups for their efforts in producing this dataset, and hope it will be used to inform work on ocean climate projections by those who seek such information.

Claire Trenham

experimental scientist, CSIRO