Video: Azure High Performance Computing

“Run your Windows and Linux HPC applications using high performance A8 and A9 compute instances on Azure, and take advantage of a backend network with MPI latency under 3 microseconds and non-blocking 32 Gbps throughput. This backend network includes remote direct memory access (RDMA) technology on Windows and Linux that enables parallel applications to scale to thousands of cores. Azure provides you with high memory and HPC-class CPUs to help you get results fast. Scale up and down based upon what you need and pay only for what you use to reduce costs.”
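The sub-3-microsecond figure refers to MPI message latency over the RDMA back-end, which is usually measured with a ping-pong microbenchmark. Below is a minimal sketch using mpi4py (an assumption on our part; it is not an Azure-specific tool), run across two ranks.

```python
# Minimal MPI ping-pong latency sketch (assumes mpi4py and an MPI launcher,
# e.g. "mpiexec -n 2 python pingpong.py"); not an Azure-specific benchmark.
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.zeros(1, dtype='b')   # 1-byte message, to expose pure latency
iters = 10000

comm.Barrier()
t0 = time.perf_counter()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
t1 = time.perf_counter()

if rank == 0:
    # Half the round-trip time is the usual one-way latency estimate.
    print(f"one-way latency: {(t1 - t0) / iters / 2 * 1e6:.2f} us")
```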

Parallel Multiway Methods for Compression of Massive Data and Other Applications

Tamara Kolda from Sandia gave this Invited Talk at SC16. “Scientists are drowning in data. The scientific data produced by high-fidelity simulations and high-precision experiments are far too massive to store. For instance, a modest simulation on a 3D grid with 500 grid points per dimension, tracking 100 variables for 100 time steps, yields 5 TB of data. Working with this massive data is unwieldy and it may not be retained for future analysis or comparison. Data compression is a necessity, but there are surprisingly few options available for scientific data.”
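The 5 TB figure follows from simple arithmetic, assuming 4-byte single-precision values (a precision the quote does not state):

```python
# Back-of-the-envelope check of the data volume quoted above
# (assumes 4-byte single-precision values; the quote does not state precision).
grid_points = 500 ** 3      # 3D grid, 500 points per dimension
variables = 100
time_steps = 100
bytes_per_value = 4

total_bytes = grid_points * variables * time_steps * bytes_per_value
print(f"{total_bytes / 1e12:.1f} TB")   # -> 5.0 TB
```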

Dell & Intel Collaborate on CryoEM on Intel Xeon Phi

In this video from SC16, Janet Morss from Dell EMC and Hugo Saleh from Intel discuss how the two companies collaborated on accelerating CryoEM. “Cryo-EM allows molecular samples to be studied in near-native states and down to nearly atomic resolutions. Studying the 3D structure of these biological specimens can lead to new insights into their functioning and interactions, especially with proteins and nucleic acids, and allows structural biologists to examine how alterations in their structures affect their functions. This information can be used in systems biology research to understand the cell signaling network, which is part of a complex communication system.”

Co-Design 3.0 – Configurable Extreme Computing, Leveraging Moore’s Law for Real Applications

Sadasivan Shankar gave this Invited Talk at SC16. “This talk will explore six different trends, all of which are associated with some form of scaling, and how they could enable an exciting world in which we co-design a platform dependent on the applications. I will make the case that this form of “personalization of computation” is achievable and is necessary for applications of today and tomorrow.”

Video: How AI can bring on a second Industrial Revolution

“The AI is going to flow across the grid — the cloud — in the same way electricity did. So everything that we had electrified, we’re now going to cognify. And I owe it to Jeff, then, that the formula for the next 10,000 start-ups is very, very simple, which is to take x and add AI. That is the formula, that’s what we’re going to be doing. And that is the way in which we’re going to make this second Industrial Revolution. And by the way — right now, this minute, you can log on to Google and you can purchase AI for six cents, 100 hits. That’s available right now.”

Video: Advances and Challenges in Wildland Fire Monitoring and Prediction

Janice Coen from NCAR gave this Invited Talk at SC16. “The past two decades have seen the infusion of technology that has transformed the understanding, observation, and prediction of wildland fires and their behavior, as well as provided a much greater appreciation of their frequency, occurrence, and attribution in a global context. This talk will highlight current research in integrated weather-wildland fire computational modeling, fire detection and observation, and their application to understanding and prediction.”

SAGE Project Looks to Percipient Storage for Exascale

“The SAGE project, which incorporates research and innovation in hardware and enabling software, will significantly improve the performance of data I/O and enable computation and analysis to be performed more locally to data wherever it resides in the architecture, drastically minimizing data movements between compute and data storage infrastructures. With a seamless view of data throughout the platform, incorporating multiple tiers of storage from memory to disk to long-term archive, it will enable APIs and programming models to easily use such a platform and to efficiently apply the data analytics techniques most appropriate to the problem space.”

Thomas Sterling Presents: HPC Runtime System Software for Asynchronous Multi-Tasking

Thomas Sterling presented this Invited Talk at SC16. “Increasing sophistication of application program domains combined with expanding scale and complexity of HPC system structures is driving innovation in computing to address sources of performance degradation. This presentation will provide a comprehensive review of driving challenges, strategies, examples of existing runtime systems, and experiences. One important consideration is the possible future role of advances in computer architecture to accelerate the likely mechanisms embodied within typical runtimes. The talk will conclude with suggestions of future paths and work to advance this possible strategy.”
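As a loose illustration of the asynchronous multi-tasking idea the talk surveys, and not a sketch of any particular HPC runtime, dependent work can be expressed as tasks whose results are futures the runtime resolves as inputs become ready:

```python
# Toy illustration of asynchronous task execution with futures; real HPC
# runtimes (the subject of the talk) add distributed scheduling, locality
# management, and lightweight threads, none of which is modeled here.
from concurrent.futures import ThreadPoolExecutor

def load(i):
    return list(range(i * 10, i * 10 + 10))

def reduce_chunk(chunk):
    return sum(chunk)

with ThreadPoolExecutor() as pool:
    loads = [pool.submit(load, i) for i in range(4)]               # independent tasks
    sums = [pool.submit(reduce_chunk, f.result()) for f in loads]  # dependent tasks
    total = sum(f.result() for f in sums)

print(total)  # 0 + 1 + ... + 39 = 780
```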

Video: The Materials Project – A Google of Materials

“The Materials Project is harnessing the power of supercomputing together with state-of-the-art quantum mechanical theory to compute the properties of all known inorganic materials and beyond, design novel materials and offer the data for free to the community together with online analysis and design algorithms. The current release contains data derived from quantum mechanical calculations for over 60,000 materials and millions of properties.”
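The data are served through a public REST API; a minimal query sketch, assuming the pymatgen MPRester client and a free Materials Project API key, might look like this:

```python
# Minimal sketch of pulling Materials Project data, assuming the pymatgen
# MPRester client and a free API key from materialsproject.org.
from pymatgen.ext.matproj import MPRester

with MPRester("YOUR_API_KEY") as mpr:
    # Fetch the computed crystal structure of silicon (material id mp-149).
    structure = mpr.get_structure_by_material_id("mp-149")
    print(structure.composition.reduced_formula, structure.get_space_group_info())
```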

Memory Bandwidth and System Balance in HPC Systems

“This talk reviews the history of the changing balances between computation, memory latency, and memory bandwidth in deployed HPC systems, then discusses how the underlying technology changes led to these market shifts. Key metrics are the exponentially increasing relative performance cost of memory accesses and the massive increases in concurrency that are required to obtain increased memory throughput. New technologies (such as stacked DRAM) allow more pin bandwidth per package, but do not address the architectural issues that make high memory bandwidth expensive to support.”
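The concurrency requirement can be made concrete with Little's Law, which says the data in flight must equal bandwidth times latency; the figures below are illustrative assumptions rather than numbers from the talk:

```python
# Little's Law estimate of the memory-level concurrency needed to sustain a
# target bandwidth; the bandwidth and latency figures are illustrative assumptions.
bandwidth = 100e9        # bytes/s sustained (100 GB/s)
latency = 100e-9         # seconds per memory access (100 ns)
cache_line = 64          # bytes transferred per outstanding request

bytes_in_flight = bandwidth * latency            # 10,000 bytes
lines_in_flight = bytes_in_flight / cache_line   # ~156 cache lines
print(f"{lines_in_flight:.0f} outstanding cache-line requests")
```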