

NASA Spins Up High Performance Computing Challenge

Today NASA announced a code-speedup contest called the High Performance Fast Computing Challenge (HPFCC). The competition will reward qualified contenders who can optimize the agency’s FUN3D design software so it runs 10 to 10,000 times faster on the Pleiades supercomputer without any decrease in accuracy. “This is the ultimate ‘geek’ dream assignment,” said Doug Rohn, director of NASA’s Transformative Aeronautics Concepts Program (TACP). “Helping NASA speed up its software to help advance our aviation research is a win-win for all.”

NASA Boosts Pleiades Supercomputer with Broadwell CPUs and LTO Tape

NASA Ames reports that SGI has completed an important upgrade to the Pleiades supercomputer. “As of July 1, 2016, all of the remaining racks of Intel Xeon X5670 (Westmere) processors were removed from Pleiades to make room for an additional 14 Intel Xeon E5-2680v4 (Broadwell) racks, doubling the number of Broadwell nodes to 2,016 and increasing the system’s theoretical peak performance to 7.25 petaflops. Pleiades now has a total of 246,048 CPU cores across 161 racks containing four different Intel Xeon processor types, and provides users with more than 900 terabytes of memory.”
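Theoretical peak figures like the 7.25 petaflops quoted above follow from simple per-node arithmetic: nodes × sockets × cores per socket × clock rate × floating-point operations per cycle. As an illustrative sketch only, here is that calculation for the Broadwell partition, using assumed Xeon E5-2680v4 figures (2 sockets per node, 14 cores per socket, 2.4 GHz base clock, 16 double-precision FLOPs/cycle with AVX2 FMA); these parameters are assumptions for illustration, not NASA's official accounting, and the 7.25 PF system total also includes the three older Xeon generations.

```python
def peak_tflops(nodes, sockets, cores_per_socket, ghz, flops_per_cycle):
    """Theoretical peak in TFLOPS.

    nodes * sockets * cores * GHz gives giga-cycles/second across the
    partition; multiplying by FLOPs/cycle yields GFLOPS, and dividing
    by 1000 converts to TFLOPS.
    """
    return nodes * sockets * cores_per_socket * ghz * flops_per_cycle / 1000.0

# Assumed Broadwell (E5-2680v4) parameters -- illustrative only.
broadwell = peak_tflops(nodes=2016, sockets=2, cores_per_socket=14,
                        ghz=2.4, flops_per_cycle=16)
print(f"Broadwell partition peak: {broadwell / 1000:.2f} petaflops")
```

Under these assumptions the 2,016 Broadwell nodes alone contribute roughly 2.2 petaflops of the system's quoted 7.25 PF peak.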

Video: Discovering the Origin of Stars Through 3D Visualization

This visualization from David Ellsworth and Tim Sandstrom at NASA Ames shows the evolution of a giant molecular cloud over 700,000 years. The simulation ran on the Pleiades supercomputer using the ORION2 code developed at the University of California, Berkeley. The visualization depicts how gravitational collapse leads to the formation of an infrared dark cloud (IRDC) filament in which protostars begin to develop, shown by the bright orange luminosity along the main and surrounding filaments.

Long Live the King – The Complicated Business of Upgrading Legacy HPC Systems

“Upgrading legacy HPC systems relies as much on the requirements of the user base as it does on the budget of the institution buying the system. There is a gamut of technology and deployment methods to choose from, and the picture is further complicated by infrastructure such as cooling equipment, storage, and networking – all of which must fit into the available space. However, in most cases it is the requirements of the codes and applications being run on the system that ultimately define the choice of architecture when upgrading a legacy system. In the most extreme cases, these requirements can restrict the available technology, effectively locking an HPC center into a single technology, or restricting the adoption of new architectures because of the added complexity associated with code modernization, or porting existing codes to new technology platforms.”