Research Demands More Compute Power and Faster Storage for Complex Computational Applications


Sponsored Post

Katie (Garrison) Rivera, Marketing Communications, One Stop Systems

Many universities, private research labs and government research agencies have begun using High Performance Computing (HPC) servers, compute accelerators and flash storage arrays to accelerate a wide array of research across disciplines in math, science and engineering. These labs use GPUs for parallel processing and flash memory for storing large datasets. Many universities also operate shared HPC labs where students and researchers can analyze and store vast amounts of data more quickly.

The University of Washington’s Nuclear Physics Laboratory (now CENPA) performs research in experimental physics. Research topics include neutrinos, precision muon physics, gravitational and sub-gravitational physics, axion searches and more. A central goal of such research is to unify the various physics theories and better understand what our universe is made of. Physics applications generate an incredible amount of data that can be computed quickly in parallel across the thousands of compute cores provided by multiple GPUs. UW tested the OSS 3U High Density Compute Accelerator (HDCA), populated with 16 NVIDIA Tesla K20X GPUs, alongside an existing server and found that it performed nuclear physics calculations six times faster than the server by itself.
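
The pattern looks roughly like the sketch below: a large dataset is split into chunks, and each chunk is handed to its own GPU. The workload and the CuPy library used here are illustrative assumptions, not UW's actual physics codes.

```python
# Minimal sketch of splitting one large dataset across several GPUs.
# The FFT workload and the CuPy library are illustrative assumptions,
# not the actual UW/CENPA physics applications.
import numpy as np
import cupy as cp

def process_chunk(chunk, device_id):
    """Copy one slice of the dataset to a GPU, run a compute-heavy kernel,
    and bring the reduced result back to host memory."""
    with cp.cuda.Device(device_id):
        data = cp.asarray(chunk)                  # host -> device copy
        spectrum = cp.fft.fft(data)               # stand-in for a physics kernel
        return float(cp.abs(spectrum).sum())      # device -> host reduction

def process_dataset(dataset):
    num_gpus = cp.cuda.runtime.getDeviceCount()   # e.g. 16 in the HDCA test
    chunks = np.array_split(dataset, num_gpus)
    # For clarity the chunks are dispatched one device at a time; production
    # codes overlap transfers and kernels with streams or one process per GPU.
    return [process_chunk(c, gpu) for gpu, c in enumerate(chunks)]

if __name__ == "__main__":
    print(process_dataset(np.random.rand(1_000_000)))
```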

Government research agencies such as NASA also utilize HPC servers, compute accelerators and flash storage arrays in their labs. NASA Goddard Space Flight Center’s High-End Computer Network (HECN) Team is using two OSS 2U Compute Accelerators to demonstrate high-speed disk-to-disk transfers. Each enclosure supports up to eight RAID controllers that transfer data to solid-state disk drives, achieving disk-to-disk transfer speeds that exceed 100Gb/s. These transfer speeds support NASA’s climate prediction work and other complex modeling and simulation tasks.
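
A rough sanity check of those figures (our own arithmetic, not NASA's): 100 Gb/s works out to about 12.5 GB/s, or roughly 1.5 GB/s sustained per RAID controller if the load is spread evenly across the eight controllers in an enclosure.

```python
# Back-of-the-envelope check of the transfer figures above: ~100 Gb/s
# aggregate, spread across the eight RAID controllers in one enclosure.
# The even per-controller split is an assumption made for illustration.
total_gbits_per_s = 100
total_gbytes_per_s = total_gbits_per_s / 8          # 8 bits per byte -> 12.5 GB/s
controllers = 8
per_controller_gbytes_per_s = total_gbytes_per_s / controllers
print(f"{total_gbytes_per_s:.1f} GB/s aggregate, "
      f"~{per_controller_gbytes_per_s:.2f} GB/s per RAID controller")
```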

Complex modeling applications such as ocean circulation modeling, tsunami simulation, computational fluid dynamics and weather and climate forecasting depend on computing power to be as accurate as possible. For example, weather and climate forecasting uses mathematical models of the atmosphere and oceans to predict the weather from current conditions. Realistic forecasts only became possible with the advent of computers, and as the technology has advanced, forecasts have become more accurate. Global and regional forecast models run in HPC research labs all over the world, drawing on current weather observations from sources such as weather satellites.
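
Stripped to its simplest form, the idea is numerical integration of a physical model starting from observed conditions. The toy example below steps a one-dimensional heat-diffusion equation forward in time; real atmosphere and ocean models solve far more elaborate three-dimensional equations, so this is only a sketch of the method, not production forecast code.

```python
# Toy numerical "forecast": start from observed conditions and step a
# discretized physical equation forward in time. A 1-D heat-diffusion
# equation stands in for the far more complex 3-D atmosphere/ocean
# equations used in real forecast models.
import numpy as np

nx, dx, dt, alpha = 100, 1.0, 0.1, 1.0      # grid points, spacing, time step, diffusivity
temp = np.zeros(nx)
temp[40:60] = 30.0                           # "current conditions": a warm patch

def step(t):
    # Explicit finite-difference update of dT/dt = alpha * d2T/dx2
    nxt = t.copy()
    nxt[1:-1] = t[1:-1] + alpha * dt / dx**2 * (t[2:] - 2 * t[1:-1] + t[:-2])
    return nxt

for _ in range(500):                         # integrate forward to produce a "forecast"
    temp = step(temp)
print(f"peak temperature after forecast window: {temp.max():.2f}")
```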

These models can produce short-term weather forecasts or long-term climate projections, which has helped researchers interpret and predict climate change. Regional models have improved the tracking of tropical cyclones and the prediction of air quality. All weather and climate models ingest and produce huge amounts of data, and processing that data and executing the intricate calculations required for modern forecasting necessitates powerful GPUs.

In addition to fast computation, research applications require a large amount of storage because of the sheer volume of data that must be analyzed and recorded. A typical server in the high performance computing industry might be 4U with 16 internal 1TB drives, for 16TB of storage. Connecting that one server to the OSS 3U Flash Storage Array brings it to over 200TB of storage. Reaching that capacity with servers alone would take more than 12 of the 4U, 16TB servers (over 48U of rack space); with the flash array it fits in 7U. Adding large amounts of storage in such a small footprint lets HPC research labs dramatically increase capacity without sacrificing rack space.
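
The rack-space comparison works out as follows, using the capacities and rack units cited above:

```python
# Rack-space arithmetic behind the comparison above (figures from the article).
import math

server_capacity_tb = 16          # 4U server with 16 x 1 TB internal drives
server_rack_units = 4
flash_array_tb = 200             # OSS 3U flash storage array, >200 TB
flash_array_rack_units = 3

servers_needed = math.ceil(flash_array_tb / server_capacity_tb)             # 13 servers
rack_units_servers_only = servers_needed * server_rack_units                # 52U (over 48U)
rack_units_server_plus_array = server_rack_units + flash_array_rack_units   # 7U
print(servers_needed, rack_units_servers_only, rack_units_server_plus_array)
```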

This guest article was written by Katie (Garrison) Rivera, Marketing Communications at One Stop Systems.