Supercomputing Wind Farms at RES


OCF reports that Renewable Energy Systems Group [RES] is using a new high performance cluster and big data storage system to support the design, build, and operation of wind farms globally. The system went live in March 2014.

“We had a cluster before which was very helpful and very successful in supporting our work,” says Clément Bouscasse, Forecasting and Flow Modeling Manager, RES Group. “As our business grew, we found ourselves using the cluster more and more. After four years, we outgrew it. We chose Cray’s hardware and, in turn, Cray directed us to OCF for the design and build. Due to power and cooling constraints at our main site, we worked with OCF to build the cluster in an off-site data centre.”

The cluster, which is seven times more powerful than its predecessor, is enabling analysts to more efficiently undertake work such as wind resource mapping [for turbine placement], historical time series generation, project design refinement, and wind turbulence modeling using computational fluid dynamics (CFD). The cluster uses Cray servers with Intel processors, designed, installed, and maintained by OCF, a provider of high performance data processing, management, storage, and analytics.

The technical team at RES can also now run longer mesoscale simulations, for example reviewing 10 years’ worth of wind data for a specific local area, as opposed to just one year’s worth on the old cluster. This improves the accuracy of wind turbine placement and enables more detailed energy yield analysis. The team can also experiment with more complex CFD models. Previously, analysts picked a subset of days to act as a long-term representation of conditions; now they can model all environmental elements, making fewer assumptions and reaching more accurate decisions.
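To illustrate why a longer record matters, here is a minimal sketch (not RES’s actual workflow, and using entirely synthetic data) showing that 10-year averages of a wind speed time series scatter far less than single-year averages, so resource estimates built on them carry less sampling error.

```python
import numpy as np

# Illustrative only: synthetic daily mean wind speeds (m/s) over 40 years.
rng = np.random.default_rng(0)
daily = rng.normal(loc=7.5, scale=2.0, size=(40, 365))

# One resource estimate per year vs. one per decade.
annual_means = daily.mean(axis=1)                         # 40 estimates
decadal_means = annual_means.reshape(4, 10).mean(axis=1)  # 4 estimates

# The 10-year averages cluster much more tightly around the true mean,
# which is why a longer record supports more confident turbine placement.
print(f"spread of 1-year estimates:  {annual_means.std():.3f} m/s")
print(f"spread of 10-year estimates: {decadal_means.std():.3f} m/s")
```

With independent years, the spread of a 10-year average shrinks by roughly a factor of √10 relative to a single year, which is the statistical benefit the longer simulations buy.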

The cluster is supported by 128TB of high capacity, big data storage, built using Boston SATA storage connected via 10GbE to the main data network.

“In our new high performance storage system we can store the files analysts need access to on a daily basis,” says Clément. “We generate a lot of data – but fortunately after processing, file sizes reduce from potentially Terabytes [TBs] to maybe only a few hundred Megabytes. We’re making every effort to reduce the size of files we produce and trying to be extra efficient in the post process. Despite this, we’ve already generated a few TBs in just a couple of months.”

RES also has access to two IBM Ultrium 6-drive autoloader tape libraries, which it is using for data backup purposes. Clément continues: “In the first instance, we’re using the tape library as a back-up system. In the future, we hope to also use it as an efficient archiving solution, which is key for us since the cluster is hosted off site. This should limit the number of required visits to the data centre.”
