Pete Beckman Presents: Exascale Architecture Trends


“Argonne National Laboratory is one of the labs helping to lead the exascale push for the nation with the DOE. We lead in a number of areas with software and storage systems and applied math. And we’re really focusing our expertise on those new ideas, those novel things that will allow us to leapfrog the standard slow evolution of technology and get something further out ahead, three years, five years out ahead. And that’s where our research is focused.”

With Hazel Hen Cray XC40, HLRS Upgrades to 7.42 Petaflops

Hazel Hen, the new Cray XC40 system at the High Performance Computing Center Stuttgart (HLRS), delivers a peak performance of 7.42 Petaflops (quadrillion floating point operations per second).

HLRS in Stuttgart, Germany has upgraded its Hornet system to Hazel Hen, a 7.42 Petaflop Cray XC40 supercomputer. Twice as fast as its predecessor, Hazel Hen is now ready to support European scientific and industrial users in their pursuit of R&D breakthroughs. “In case you’re wondering, HLRS chose the name Hazel Hen because it’s the one animal that eats hornets.”

Maximizing HPC Compute Resources with Minimal Cost


“As HPC resource requirements continue to increase, the need for finding economical solutions to handle the rising requirements increases as well. There are numerous ways to approach this challenge. For example, leveraging existing equipment, adding new or used equipment, and handling uncommon peak usage dynamically through cloud solutions managed by a central job management system can prove to be highly available and resource rich, while remaining economical. In this presentation we will discuss how Wayne State University implemented a combination of these approaches to dramatically increase our compute resources for the equivalent cost of only a few new servers.”
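The cloud-bursting piece of such an approach is typically implemented through a scheduler’s elastic-node support. The presentation does not name a specific job management system, so the following is only a minimal sketch assuming Slurm’s cloud-node mechanism, with hypothetical node names and helper scripts:

    # slurm.conf fragment (illustrative; node names and scripts are hypothetical)
    # Scripts the controller runs to start and stop cloud instances on demand
    ResumeProgram=/usr/local/sbin/start_cloud_node.sh
    SuspendProgram=/usr/local/sbin/stop_cloud_node.sh
    # Power down cloud nodes idle for 10 minutes; allow 5 minutes for boot
    SuspendTime=600
    ResumeTimeout=300
    # CLOUD-state nodes exist only while jobs are assigned to them
    NodeName=cloud[001-016] CPUs=16 State=CLOUD
    # Peak-period jobs overflow into the burst partition
    PartitionName=burst Nodes=cloud[001-016] MaxTime=24:00:00 State=UP

With a setup along these lines, the central scheduler treats on-premise and cloud capacity as a single pool, which is the “highly available and resource rich” behavior the abstract describes.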

Submit Your 2016 Research Allocation Requests for the Bridges Supercomputer


XSEDE is now accepting 2016 Research Allocation Requests for the Bridges supercomputer. Available starting in January 2016 at the Pittsburgh Supercomputing Center, Bridges represents a new concept in high performance computing: a system designed to support familiar, convenient software and environments for both traditional and non-traditional HPC users.

Podcast: Supercomputing Powers Efforts to Save Ocean Coral


What can we do to help ocean coral survive global warming? In this TACC podcast, Jorge Salazar looks at how researchers are using the Stampede supercomputer to investigate how corals can genetically adapt to warmer waters.

Video: HPC in the Design of Aircraft Engines

Brian Mitchell, GE

“This webinar replay discusses the use of high performance computing (HPC) in the design of aircraft jet engines and gas turbines used to generate electrical power. HPC is the critical enabler in this process, but applying HPC effectively in an industrial design setting requires an integrated hardware/software solution and a clear understanding of how the value outweighs the costs. The webinar shares GE’s perspective on the successful deployment and utilization of HPC, offers examples of HPC’s impact on GE products, and discusses future trends.”

HP and SanDisk to Team on Memory-Driven Computing


Today HP and SanDisk announced a long-term partnership to collaborate on a new technology within the Storage Class Memory (SCM) category. The partnership will center on HP’s Memristor technology and SanDisk’s non-volatile ReRAM memory technology, combining the two companies’ manufacturing and design expertise to create new enterprise-wide solutions for Memory-Driven Computing. The two companies will also partner on enhancing data center solutions with SSDs.

Lustre Video: Robinhood v3 Policy Engine and Beyond


“The Robinhood Policy Engine is a versatile tool for managing the contents of large file systems. It maintains a replica of filesystem metadata in a database that can be queried at will. It makes it possible to schedule mass actions on filesystem entries by defining attribute-based policies.”
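To give a flavor of what an attribute-based policy looks like, here is a short sketch loosely following the Robinhood v3 policy syntax; the class name, path, and thresholds are hypothetical, so consult the Robinhood documentation for exact keywords:

    # Illustrative Robinhood v3 policy fragment (names and thresholds invented)
    # A fileclass groups entries by attributes held in the metadata database
    fileclass stale_scratch {
        definition { tree == "/lustre/scratch" and last_access > 30d }
    }
    # A rule schedules a mass action (here, cleanup) over matching entries
    cleanup_rules {
        rule purge_stale {
            target_fileclass = stale_scratch;
            condition { last_mod > 60d }
        }
    }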

Requesting Your Input on the HPC & Large Enterprise Purchase Sentiment Survey


We’d like to invite our readers to participate in our new HPC & Large Enterprise Purchase Sentiment Survey. “It’s designed to get a feel for the technology purchasing plans of HPC and large enterprise data centers. We’ll also ask some questions about how your data center is approaching new technologies, usage models, and the like. Additionally, we’d like to know how you regard major vendors in the data center space.”

Video: Is Remote GPU Virtualization Useful?


“Although the use of GPUs has become widespread nowadays, including GPUs in current HPC clusters presents several drawbacks, mainly related to increased costs. In this talk we present how the use of remote GPU virtualization may overcome these drawbacks while noticeably increasing overall cluster throughput. The talk presents real throughput measurements made using the rCUDA remote GPU virtualization middleware.”
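For context, rCUDA interposes a client-side CUDA library so that an unmodified CUDA application runs its kernels on GPUs located in other nodes. A minimal usage sketch, assuming an installed rCUDA client, with a hypothetical server name and install path (the environment variables follow the rCUDA user guide; treat the specifics as illustrative):

    # Tell the rCUDA client where the remote GPU lives
    # (gpunode01 is a hypothetical host; ":0" selects its first GPU)
    export RCUDA_DEVICE_COUNT=1
    export RCUDA_DEVICE_0=gpunode01:0
    # Run an unmodified CUDA binary; CUDA calls are forwarded over the network
    # (/opt/rCUDA/lib is a hypothetical location for the rCUDA client library)
    LD_LIBRARY_PATH=/opt/rCUDA/lib:$LD_LIBRARY_PATH ./my_cuda_app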