A Vision of Storage for Exascale Computing

“Back in July 2012, Whamcloud was awarded the Storage and I/O Research & Development subcontract for the Department of Energy’s FastForward program. Shortly afterward, the company was acquired by Intel. Now nearly complete, the two-year contract covers key R&D necessary for a new object storage paradigm for HPC exascale computing, and the technology developed will also address next-generation storage mechanisms required by the Big Data market.”

GPU Acceleration Benefits for Applied CAE

“This presentation examines the HPC performance characteristics of CAE software and the current state of GPU parallel solvers in commercial CAE that support product design in manufacturing industries. Industry case studies will be presented on HPC adoption of GPUs for production CAE and the benefits this technology provides. Rapid simulation from GPUs demonstrates the potential of a novel HPC technology that can transform current practices in engineering analysis and design optimization.”

insideHPC Performance Guru Looks at Nvidia’s New NVLink

Bill D'Amico

“For NVLink to have its highest value, it must function properly with unified memory. That means that the Memory Management Units in the CPUs have to be aware of NVLink DMA operations and update the appropriate VM structures. The operating system needs to know when memory pages have been altered via NVLink DMA, and this can’t be solely the responsibility of the drivers. Tool developers also need to know the details so that MPI and other communication protocols can make use of the new interconnect.”
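To make the unified-memory model concrete, here is a minimal CUDA sketch (our illustration, not from D'Amico's article) in which one managed allocation is touched by both the CPU and the GPU through the same pointer. The on-demand page migration behind that single pointer is exactly the traffic an interconnect like NVLink would carry, and it is why the MMU and OS page tracking described above matters. The kernel, names, and sizes are all hypothetical.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: each GPU thread increments one element in place.
__global__ void inc(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += 1;
}

int main()
{
    const int n = 1 << 20;
    int *data = nullptr;

    // One managed allocation, visible to CPU and GPU through the same
    // pointer. The runtime migrates pages on demand; a fabric such as
    // NVLink would carry this traffic, which is why the MMU and OS must
    // be able to track pages dirtied by DMA from either side.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i)              // CPU writes the pages first
        data[i] = i;

    inc<<<(n + 255) / 256, 256>>>(data, n);  // GPU touches the same pages
    cudaDeviceSynchronize();                 // finish before the CPU reads

    printf("data[42] = %d\n", data[42]);     // expect 43
    cudaFree(data);
    return 0;
}
```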

Nvidia’s Steve Oberlin on his New Role as CTO for Accelerated Computing

Steve Oberlin

In this video from GTC 2014, Steve Oberlin from Nvidia describes his new role as Chief Technical Officer for Accelerated Computing. Along the way, he discusses the HPC lessons learned from the Cray T3E and other systems, Nvidia’s plans to tackle the challenges of the HPC Memory Wall, the current status of Project Denver, and how Nvidia plans to couple to the POWER architecture in future systems.

Interview: Steve Simms Re-elected as Community Representative Director for OpenSFS

Stephen Simms

“A key mission for OpenSFS is to keep attendees up to date on where Lustre stands. In addition to the technical talks identifying Lustre’s current capabilities, departing community representative director Tommy Minyard will deliver an OpenSFS-commissioned report from an independent organization examining the current state of Lustre.”

GTC 2014 to Reflect Latest Industry Trends

Marc Hamilton

Marc Hamilton writes that the GTC conference next week will reflect some exciting industry trends in vGPU virtualization, Big Data, and ARM processing. “We have seen an explosion in the use of GPUs for machine learning and pattern recognition. Much of this is going on in Internet data centers.”

Sponsored Post: Utilization vs. Efficiency – How to Have Them Both on your HPC System

David Lecomber

“We all want maximum utilization from our HPC systems, but do we really know how efficiently that time is being used? With Allinea Performance Reports, you now have the information you need to drive smarter computing, without changing the source code or the application.”

A Boatload of HPC Events Coming Up

With so many HPC events just around the corner, we thought it would be a good time to revisit our Spring preview and help ensure that our readers don’t miss out. We’ll be on the road for nearly a month straight, so be sure to check back here for exclusive onsite event coverage.

Radio Free HPC Looks at Why We Need Declustered RAID

In this podcast, the Radio Free HPC team looks at the coming wave of declustered RAID solutions designed to address the problem of ever-increasing RAID rebuild times. With 5 TB and larger drives in the wings, rebuild times for conventional RAID groups are becoming impractical.
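A back-of-the-envelope calculation (our numbers, not from the podcast) shows the problem: in a conventional RAID group, the rebuild is bottlenecked by the write bandwidth of the single replacement drive, assumed here to be roughly 100 MB/s:

\[
t_{\text{rebuild}} \approx \frac{C}{B} = \frac{5 \times 10^{12}\ \text{bytes}}{10^{8}\ \text{bytes/s}} = 5 \times 10^{4}\ \text{s} \approx 14\ \text{hours}
\]

Declustered RAID spreads data, parity, and spare capacity across all \(N\) drives in a pool, so a rebuild reads from and writes to many drives in parallel; to first order, the rebuild time shrinks by roughly a factor of \(N - 1\):

\[
t_{\text{declustered}} \approx \frac{C}{B \, (N - 1)}
\]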

Slidecast: How Big Workflow Delivers Business Intelligence

Rob Clyde

In this slidecast, Rob Clyde from Adaptive Computing describes Big Workflow — the convergence of Cloud, Big Data, and HPC in enterprise computing. “The explosion of big data, coupled with the collision of HPC and cloud, is driving the evolution of big data analytics,” said Clyde, CEO of Adaptive Computing. “A Big Workflow approach to big data not only delivers business intelligence more rapidly, accurately, and cost-effectively, but also provides a distinct competitive advantage.”