HPC4energy: The use of Exascale Computers in Oil & Gas, Wind Energy and Biogas Combustion Industries

New energy sources, as yet untapped, might become crucial in the mid-term. Intensive numerical simulations and prototyping are needed to assess their real value and improve their throughput. The impact of exascale HPC and data-intensive algorithms on the energy industry was first established in the U.S. Department of Energy (DOE) document “Synergistic Challenges in Data-Intensive Science and Exascale Computing.”

Energy Efficiency in the Software Stack — Cross Community Efforts

This report covers events at SC17 that focused on energy efficiency and highlights ongoing collaborations across the community to develop advanced software technologies for system energy and power management.

Exascale to Enable Smart Cities

Over at Argonne, Charlie Catlett describes how the advent of Exascale computing will enable Smart Cities designed to improve the quality of life for urban dwellers. Catlett will moderate a panel discussion on Smart Cities at the SC17 Plenary session, which kicks off the conference on Monday, Nov. 13 in Denver.

Video: Silicon Photonics for Extreme Computing

Keren Bergman from Columbia University gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. “Exaflop machines would represent a thousand-fold improvement over the current standard, the petaflop machines that first came on line in 2008. But while exaflop computers already appear on funders’ technology roadmaps, making the exaflop leap on the short timescales of those roadmaps constitutes a formidable challenge.”

A Vision for Exascale: Simulation, Data and Learning

Rick Stevens gave this talk at the recent ATPESC training program. “The ATPESC program provides an intensive two weeks of training on the key skills, approaches, and tools needed to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future. As a bridge to that future, this two-week program fills a gap in the training computational scientists typically receive through formal education or shorter courses.”

Exascale Computing: A Race to the Future of HPC

In this week’s Sponsored Post, Nicolas Dube of Hewlett Packard Enterprise outlines the future of HPC and the role and challenges of exascale computing in this evolution. The HPE approach to exascale is geared to breaking the dependencies that come with outdated protocols. Exascale computing will allow users to process data, run systems, and solve problems at a totally new scale, which will become increasingly important as the world’s problems grow ever larger and more complex.

Fast Networking for Next Generation Systems

“The Intel Omni-Path Architecture is an example of a networking system that has been designed for the Exascale era. There are many features that will enable this massive scaling of compute resources. Features and functionality are designed in at both the host and the fabric levels, which enables very large scaling when all of the components are designed together. Increased reliability is a result of integrating the CPU and fabric, which will be critical as the number of nodes expands well beyond any system in operation today. In addition, tools and software have been designed to be installed and managed across the very large number of compute nodes that will be necessary to achieve this next level of performance.”

Exascale – A Race to the Future of HPC

From Megaflops to Gigaflops to Teraflops to Petaflops, and soon to Exaflops, the march of HPC performance is always moving ahead. This whitepaper details some of the technical challenges that will need to be addressed in the coming years in order to reach exascale computing.
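For context, each of those steps is a factor of one thousand in sustained floating-point performance. The short Python sketch below is purely illustrative and simply spells out the orders of magnitude involved.

    # Illustrative only: the factor-of-1000 steps from megaflops to exaflops.
    scales = {
        "megaflop": 1e6,    # 10^6 floating-point operations per second
        "gigaflop": 1e9,    # 10^9
        "teraflop": 1e12,   # 10^12
        "petaflop": 1e15,   # 10^15, first reached by production systems in 2008
        "exaflop":  1e18,   # 10^18, a thousand-fold jump over petascale
    }

    for name, flops in scales.items():
        print(f"1 {name}/s = {flops:.0e} floating-point operations per second")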

Exascale: A race to the future of HPC

As exponential data growth reshapes industry, engineering, and scientific discovery, success has come to depend on the ability to analyze and extract insight from incredibly large data sets. Exascale computing will allow us to process data, run systems, and solve problems at a totally new scale, which will become vitally important as problems grow ever larger and more difficult. Our unmatched ability to bring new technology to the mainstream will provide systems that are markedly more affordable, usable, and efficient at handling growing workloads. To learn more, download this white paper.

Co-design for Data Analytics And Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend is the increased use of machine learning (deep learning) technologies. Indeed, machine learning speed has been drastically increased through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.
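As a minimal sketch of what GPU acceleration looks like in practice, the Python snippet below runs one training step of a placeholder network and moves the model and data to a CUDA device when one is available. It assumes PyTorch is installed; the layer sizes and synthetic batch are hypothetical, chosen only for illustration.

    # Minimal, illustrative PyTorch example: the same training step runs on CPU or GPU;
    # moving the model and tensors to a CUDA device is what delivers the speedup.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Placeholder model and synthetic data, for illustration only.
    model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    inputs = torch.randn(256, 1024, device=device)          # a batch of 256 samples
    targets = torch.randint(0, 10, (256,), device=device)   # random class labels

    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"one training step on {device}, loss = {loss.item():.4f}")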