“The notion of High Performance Computing is evolving over time, so what was deemed a leadership-class computer five years ago is already somewhat obsolete. We are talking about evolution not only in the hardware but also in the programming models, because more and more cores are available. Orchestrating calculations in a way that effectively takes advantage of this parallelism requires a lot of thinking and a lot of redesign of the algorithms behind the calculations.”
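To make the point about algorithm redesign concrete, here is a minimal sketch (not from the quote itself; the function names and sizes are illustrative assumptions) showing how a serial accumulation can be restructured as a parallel reduction with OpenMP:

#include <omp.h>
#include <cstdio>
#include <vector>

// Illustrative sketch only: a serial dot product restructured as a
// parallel reduction, the kind of rework the quote alludes to.
double dot_serial(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (size_t i = 0; i < a.size(); ++i)
        sum += a[i] * b[i];          // one core does all the work
    return sum;
}

double dot_parallel(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    // Each thread accumulates a private partial sum; OpenMP combines
    // the partials at the end, avoiding a race on the shared accumulator.
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < static_cast<long>(a.size()); ++i)
        sum += a[i] * b[i];
    return sum;
}

int main() {
    std::vector<double> a(1 << 20, 1.5), b(1 << 20, 2.0);
    std::printf("serial:   %f\n", dot_serial(a, b));
    std::printf("parallel: %f\n", dot_parallel(a, b));
}

Both versions compute the same result (compile with -fopenmp), but only the second scales with core count; production HPC codes face the same kind of restructuring at far larger scale.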
In this video from SC14, Ken Claffey from Seagate describes how the company is doubling down on high performance computing with its ClusterStor technology.
In this video, Satoshi Matsuoka, professor at Tokyo Institute of Technology, examines the role of GPUs in meeting the rapidly increasing data volumes and processing requirements of so-called big data. He argues that conventional cloud infrastructures will no longer be efficient. Will GPUs play a central role, or will they remain peripheral?
The Southern California Earthquake Center (SCEC), using the power of the petascale Blue Waters Supercomputer at the National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, has developed a physics-based model called CyberShake that simulates how an earthquake works rather than approximating the tremors based on observations.
In this video from the Nvidia booth at SC14, Terri Quinn from LLNL presents: A Livermore Perspective on Next-Generation Computing. “Terri is responsible for an organization consisting of three divisions with over 400 technical staff working in high-performance computing, computer security, and enterprise computing. Livermore Computing (LC), LLNL’s high performance computing organization, operates some of the most advanced classified and unclassified production computing environments.”