Studying long-term climate change and producing shorter-term weather forecasts are both computationally demanding tasks, typically requiring high-end HPC systems. These days that means large distributed-memory clusters made up of thousands of nodes and hundreds of thousands of cores, running MPI codes written in Fortran and C. You will find these systems scattered around the globe in large operational weather centers – a few located in the US, three or four in Europe, and a half dozen in Japan and the rest of Asia. Most countries use the data generated by these centers to prepare their regional forecasts. For example, one major weather center in the UK supports the regional weather forecasting operations of over 30 European countries.
This article is part of a series on HPC’s impact on weather forecasting and climate research.
The large operational weather centers must meet all the requirements of a mission-critical organization. They need to guarantee that daily forecasts complete within a narrow window of time. There is no margin for repeat runs – the center must have the latest data when starting each day's run. The large centers maintain dual systems and plenty of built-in redundancy to ensure this 100% availability every day. Inevitably, demand is increasing for longer forecasts – from next day out to 10 days and more – adding to the computational burden.
The computations are built on interactions between neighboring points on the grid that represents the space being simulated (the globe or a region), with the calculated variables stepped forward in time. It turns out that with today's HPC technology, moving data in and out of the processing units takes more time than the computations themselves. To be effective, systems for weather forecasting and climate modeling require high memory bandwidth and a fast interconnect across the system, as well as a robust parallel file system.
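The grid-plus-time-stepping pattern described above can be sketched with a toy stencil update. This is a minimal illustration, not any center's actual model: a simple 2D diffusion-style step in which each grid point is updated from its four neighbors, the access pattern that makes memory bandwidth, rather than arithmetic, the bottleneck.

```python
import numpy as np

def step(u, alpha=0.1):
    """One explicit time step of a 2D diffusion-like update.

    Each interior grid point is updated from its four neighbors,
    so every step streams the whole array through memory -- far
    more bytes moved than floating-point operations performed.
    """
    new = u.copy()
    new[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[:-2, 1:-1] + u[2:, 1:-1] +      # north and south neighbors
        u[1:-1, :-2] + u[1:-1, 2:] -      # west and east neighbors
        4.0 * u[1:-1, 1:-1]
    )
    return new

# Toy run: a point disturbance spreading over a small grid.
grid = np.zeros((64, 64))
grid[32, 32] = 1.0
for _ in range(10):        # step the calculated variables in time
    grid = step(grid)
```

In a production model the grid is three-dimensional and distributed across nodes, so each MPI rank must also exchange "halo" rows with its neighbors every step, which is why interconnect speed matters as much as memory bandwidth.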
Systems dedicated to climate research do not face the stringent operational requirements of weather centers. They do need solid software that can recover from component failures, since the runs are so long, and they do need to manage and analyze the large amounts of data produced by multi-year simulations. Smaller centers using scaled-down systems still need robust file systems, high memory bandwidth, and an overall well-balanced system.
Leading HPC OEMs are introducing new technologies to speed up processing, presenting new opportunities for vectorization and parallelism. Standalone, multicore products will play a major role in climate and weather forecasting. Robust parallel file systems like Lustre and GPFS are a major component of the whole system. Their role is to store the initial data and the large amounts of data generated at each time step, serving the processing units with input data and taking in the output as it is generated.
The storage and file systems are essential to dealing with the "data problem" of weather forecasting and climate modeling, which is two-fold. The first kind of data is relatively short-lived but time-critical: observations collected from multiple sources, corresponding to numerous physical, chemical, and biological properties, used as input to the models. It comes from many different instruments at the Earth's surface, from under the ocean surface, and from sensors and satellites. It is not all collected at the same instant. Before running a model, the data needs to be "assimilated" – extrapolated in time and space to match the mathematical grid points used to approximate the area over which the simulation is run. The second kind is the detailed numerical output produced at each time step of many model runs, or of a single model that simulated hundreds of years of evolving climate.
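The mapping of scattered observations onto model grid points can be illustrated with a toy scheme. The sketch below uses simple inverse-distance weighting, which is only a stand-in for the far more sophisticated assimilation methods (such as 3D-Var or 4D-Var) that operational centers actually run; the station coordinates and values are hypothetical.

```python
import numpy as np

def idw_to_grid(obs_xy, obs_val, grid_x, grid_y, power=2.0, eps=1e-12):
    """Spread scattered observations onto a regular grid using
    inverse-distance weighting: each grid point gets a weighted
    average of all observations, with nearer stations weighted
    more heavily. A toy stand-in for real data assimilation."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (G, 2)
    # Distance from every grid point to every observation site.
    d = np.linalg.norm(pts[:, None, :] - obs_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)                              # (G, O)
    field = (w @ obs_val) / w.sum(axis=1)
    return field.reshape(gx.shape)

# Hypothetical station observations: (x, y) positions and temperatures.
obs = np.array([[0.2, 0.3], [0.8, 0.7], [0.5, 0.9]])
vals = np.array([12.0, 15.0, 9.0])
field = idw_to_grid(obs, vals, np.linspace(0, 1, 20), np.linspace(0, 1, 20))
```

Because the result at each grid point is a convex combination of the observations, the interpolated field always stays within the range of the observed values, one of the sanity checks any such gridding step should pass.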
Meeting the Challenge
Data accumulation is just one of the challenges facing today's weather and climate researchers and scientists. To understand and predict Earth's weather and climate, they rely on increasingly complex computer models and simulations built on a constantly growing body of data from around the globe.
Some of today's largest and most sophisticated computer hardware and software are used to predict our weather and investigate climate change. They do a far better job of it than was possible only a few years ago.
That said, addressing the impacts of climate change and better assisting society in confronting adverse weather phenomena requires ever more capable computational solutions, a challenge awaiting the arrival of the anticipated exascale systems.
To read the full insideHPC Guide to Weather Forecasting and Climate Research, you can download the complete report from the insideHPC White Paper Library, courtesy of SGI and Intel.