
Exascale Computing – What are the Goals and the Baseline?

Thomas Schulthess presented this talk at the MVAPICH User Group. “Implementation of exascale computing will be different in that application performance is supposed to play a central role in determining system performance, rather than just the floating point performance of the High Performance Linpack benchmark. This immediately raises the question of what yardstick we will use to measure progress towards exascale computing. I will discuss what type of performance improvements will be needed to reach kilometer-scale global climate and weather simulations. This challenge will probably require more than exascale performance.”
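As a rough illustration of why kilometer-scale simulation can outrun exascale (a back-of-envelope sketch with assumed baseline numbers, not figures from the talk): refining the horizontal grid by 10x multiplies the work in each horizontal dimension and forces a matching cut in the time step, so the cost grows by roughly a factor of a thousand before vertical resolution and I/O are even counted.

```python
# Back-of-envelope scaling for refining a global atmosphere model from
# 10 km to 1 km grid spacing. All numbers are illustrative assumptions,
# not figures from the talk.

baseline_km = 10.0   # assumed current horizontal grid spacing
target_km = 1.0      # kilometer-scale target
refine = baseline_km / target_km

# Work grows with both horizontal dimensions, and the explicit time step
# must shrink in proportion (CFL stability), so cost grows ~refine**3.
cost_factor = refine ** 3

baseline_sustained_pflops = 0.1  # assumed sustained rate of a current run
required_pflops = baseline_sustained_pflops * cost_factor

print(f"cost grows by ~{cost_factor:,.0f}x")
print(f"required sustained rate: ~{required_pflops:,.0f} PFLOPS")
```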

Radio Free HPC Looks at Alternative Processors for High Performance Computing

In this podcast, the Radio Free HPC team looks at why it’s so difficult for new processor architectures to gain traction in HPC and the datacenter. Plus, we introduce a new regular feature for our show: The Catch of the Week.

Video: Exploring I/O Challenges at Exascale

“Clear trends in past and current petascale systems (i.e., Jaguar and Titan) and in the new generation of systems that will transition us toward exascale (i.e., Aurora and Summit) show that concurrency and peak performance are growing dramatically; however, I/O bandwidth remains stagnant. In this talk, we explore the challenges of dealing with I/O-ignorant high performance computing systems and opportunities for integrating I/O awareness into these systems.”
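The squeeze is easy to see in a checkpoint-time estimate. A minimal sketch with assumed (not measured) system numbers: if memory capacity grows an order of magnitude while file-system bandwidth merely doubles, the time to write a full-memory checkpoint grows from minutes to over an hour.

```python
# Illustrative checkpoint-time estimate with assumed, not measured, numbers:
# compute and memory keep growing while file-system bandwidth barely moves.

def checkpoint_minutes(memory_pb: float, bandwidth_tbs: float) -> float:
    """Minutes to write memory_pb petabytes at bandwidth_tbs terabytes/s."""
    return memory_pb * 1000.0 / bandwidth_tbs / 60.0

# Assumed petascale-era system: ~0.7 PB of memory, ~1 TB/s to storage.
print(f"petascale:    {checkpoint_minutes(0.7, 1.0):5.1f} min per checkpoint")

# Assumed pre-exascale system: ~10 PB of memory, ~2 TB/s to storage.
print(f"pre-exascale: {checkpoint_minutes(10.0, 2.0):5.1f} min per checkpoint")
```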

The Evolution of HPC

“When the history of HPC is viewed in terms of technological approaches, three epochs emerge. The most recent epoch, that of co-design systems, is new and somewhat unfamiliar to many HPC practitioners. Each epoch is defined by a fundamental shift in design, new technologies, and the economics of the day. A network co-design model allows data algorithms to be executed more efficiently using smart interface cards and switches. As co-design approaches become more mainstream, design resources will begin to focus on specific issues and move away from optimizing general performance.”
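One concrete case of network co-design is offloading MPI collectives into the fabric. A minimal sketch using mpi4py (an assumption; any MPI binding would do): the application issues a standard allreduce, and a smart switch or interface card can execute the reduction in the network without any change to this code.

```python
# Minimal MPI allreduce sketch (requires mpi4py and an MPI library).
# On a co-designed fabric that offloads collectives into the switches,
# this same call can execute in the network; the application is unchanged.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
local = np.full(4, comm.Get_rank(), dtype=np.float64)  # per-rank values
total = np.empty_like(local)

# The reduction is the operation a smart network can take over.
comm.Allreduce(local, total, op=MPI.SUM)

if comm.Get_rank() == 0:
    print("sum across ranks:", total)
```

Run it with, e.g., mpiexec -n 4 python allreduce_demo.py; whether the reduction happens on the hosts or in the switches is a property of the co-designed system, not of the program.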

Designing Machines Around Problems: The Co-Design Push to Exascale

A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they have actually been used in the past, to a lesser degree, as a way to enhance performance. Current co-design methods go deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”

New Report Looks at European Exascale Projects

“Between 2011 and 2016, eight projects, with a total budget of more than €50 Million, were selected for this first push in the direction of the next-generation supercomputer: CRESTA, DEEP and DEEP-ER, EPiGRAM, EXA2CT, Mont-Blanc (I + II) and Numexas. The challenges they addressed in their projects were manifold: innovative approaches to algorithm and application development, system software, energy efficiency, tools and hardware design took centre stage.”

Raj Hazra Presents: Driving to Exascale

Raj Hazra presented this talk at ISC 2016. As part of the company’s launch of the Intel Xeon Phi processor, Hazra describes how cognitive computing and HPC are going to work together. “Intel will introduce and showcase a range of new technologies helping to fuel the path to deeper insight and HPC’s next frontier. Among this year’s new products is the Intel Xeon Phi processor, Intel’s first bootable host processor specifically designed for highly parallel workloads. It is also the first to integrate both memory and fabric technologies. A bootable x86 CPU, the Intel Xeon Phi processor offers greater scalability and is capable of handling a wider variety of workloads and configurations than accelerator products.”

White House Releases Strategic Plan for NSCI Initiative

This week the White House Office of Science and Technology Policy released the Strategic Plan for the National Strategic Computing Initiative (NSCI). “The NSCI strives to establish and support a collaborative ecosystem in strategic computing that will support scientific discovery and economic drivers for the 21st century, and that will not naturally evolve from current commercial activity,” write Altaf Carim, William Polk, and Erin Szulman of the OSTP in a blog post.

Preliminary Agenda Posted for HPC User Forum in Austin, Sept. 6-8

IDC has published the preliminary agenda for its next HPC User Forum. The event will take place Sept. 6-8 in Austin, Texas.

Intel® Xeon Phi™ Processor—Highly Parallel Computing Engine for HPC

For decades, Intel has been enabling insight and discovery through its technologies and contributions to parallel computing and High Performance Computing (HPC). Central to the company’s most recent work in HPC is a new design philosophy for clusters and supercomputers called Intel® Scalable System Framework (Intel® SSF), an approach designed to enable sustained, balanced performance as the community pushes towards the Exascale era.