Exascale – A Race to the Future of HPC

From Megaflops to Gigaflops to Teraflops to Petaflops, and soon to Exaflops, the march of HPC performance never stops. This whitepaper details some of the technical challenges that will need to be addressed in the coming years in order to reach exascale computing.

Exascale: A race to the future of HPC

As exponential data growth reshapes industry, engineering, and scientific discovery, success increasingly depends on the ability to analyze and extract insight from incredibly large data sets. Exascale computing will allow us to process data, run systems, and solve problems at an entirely new scale, which will become vitally important as problems grow ever larger and more difficult. Our unmatched ability to bring new technology to the mainstream will provide systems that are markedly more affordable, usable, and efficient at handling growing workloads. To learn more, download this white paper.

Co-design for Data Analytics And Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend is the increased use of machine learning (deep learning) technologies. Indeed, machine learning has been drastically accelerated through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.
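As a purely illustrative picture of the GPU acceleration mentioned above, the minimal sketch below times the same dense matrix multiplication on the CPU and on a CUDA GPU using PyTorch. The matrix size, device names, and timing approach are assumptions chosen for illustration, not anything taken from the position paper.

```python
# Minimal sketch: timing the same dense workload on CPU vs. GPU with PyTorch.
# Assumes PyTorch is installed and a CUDA-capable GPU is available.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()      # make sure setup work has finished
    start = time.perf_counter()
    c = a @ b                         # dense matrix multiply
    if device == "cuda":
        torch.cuda.synchronize()      # wait for the GPU kernel to complete
    return time.perf_counter() - start

cpu_t = time_matmul("cpu")
gpu_t = time_matmul("cuda") if torch.cuda.is_available() else float("nan")
print(f"CPU: {cpu_t:.3f}s  GPU: {gpu_t:.3f}s")
```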

Network Co-design as a Gateway to Exascale

Achieving better scalability and performance at exascale will require full data reach, that is, the ability to analyze data wherever it resides in the system. Onload architectures lack this capability and force all data to move to the CPU before any analysis can take place. When data can be analyzed everywhere, every active component in the cluster contributes to the computing capability and boosts performance. In effect, the interconnect becomes its own “CPU” and provides in-network computing capabilities.
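To make the in-network computing idea concrete, here is a minimal sketch, not taken from the source material, that posts a standard non-blocking MPI allreduce (via mpi4py and NumPy) and overlaps it with local work. On interconnects with in-network collective offload, the reduction can progress in the switches and NICs while the host CPU keeps computing; the overlap pattern shown is plain MPI, and whether the reduction actually executes in the network depends on the fabric and MPI library in use.

```python
# Sketch: overlapping computation with a non-blocking allreduce (mpi4py + NumPy).
# On fabrics with in-network collective offload, the reduction can progress in
# the switches/NICs while the host CPU does other work; the application-side
# pattern is the same either way.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
send = np.random.rand(1_000_000)               # local partial results
recv = np.empty_like(send)

req = comm.Iallreduce(send, recv, op=MPI.SUM)  # post the collective

local = np.sin(send).sum()                     # unrelated local work overlaps the reduction

req.Wait()                                     # block only when the global sum is needed
if comm.rank == 0:
    print("global sum of first element:", recv[0], "local work:", local)
```

Run with, for example, `mpirun -np 4 python overlap.py` (the script name here is just a placeholder).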

Square Kilometer Array in HPC

Next-generation radio telescopes will require tremendous amounts of compute power. With the current state of the art, the Square Kilometer Array (SKA), now entering its pre-construction phase, will require in excess of one ExaFlop/s to process and reduce the massive amount of data generated by its sensors. The nature of the processing involved means that conventional high performance computing (HPC) platforms are not ideally suited to it. Consequently, the Square Kilometer Array project requires active and intensive involvement from both the high performance computing research community and industry in order to make sure a suitable system is available when the telescope is built. In this paper, we present a first analysis of the processing required, and a tool that will facilitate future analysis and external involvement.

Co-Design Offloading

The move to network offloading is the first step toward co-designed systems. Servicing the huge number of packets generated at modern data rates requires a large amount of CPU overhead, and that overhead can significantly reduce network performance. Offloading network processing to the network interface card helped solve this bottleneck, as well as several others.
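A rough back-of-envelope calculation, using assumed numbers rather than figures from the guide, shows why per-message processing on the host becomes a bottleneck at modern line rates:

```python
# Back-of-envelope: CPU cycle budget per message at a given line rate.
# All numbers below are illustrative assumptions, not measurements.
LINE_RATE_GBPS = 100          # assumed link speed in gigabits per second
MESSAGE_BYTES = 1024          # assumed average message size
CPU_GHZ = 3.0                 # assumed clock of the core servicing the NIC

bytes_per_second = LINE_RATE_GBPS * 1e9 / 8
messages_per_second = bytes_per_second / MESSAGE_BYTES
cycles_per_message = CPU_GHZ * 1e9 / messages_per_second

print(f"{messages_per_second / 1e6:.1f} M messages/s -> "
      f"{cycles_per_message:.0f} CPU cycles per message")
# ~12.2 M messages/s leaves only ~246 cycles per message on one 3 GHz core,
# which is why offloading protocol processing to the NIC pays off.
```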

The insideHPC Guide to Co-Design Architecture

Co-design and offloading are important tools for achieving exascale computing. Application developers and system designers can take advantage of network offload and emerging co-design protocols to accelerate their current applications. Applying even basic co-design and offloading methods to smaller-scale systems can deliver more performance on less hardware, resulting in lower cost and higher throughput. Learn more by downloading this guide.

The Evolution of HPC

“When the history of HPC is viewed in terms of technological approaches, three epochs emerge. The most recent epoch, that of co-design systems, is new and somewhat unfamiliar to many HPC practitioners. Each epoch is defined by a fundamental shift in design, new technologies, and the economics of the day. A network co-design model allows data algorithms to be executed more efficiently using smart interface cards and switches. As co-design approaches become more mainstream, design resources will begin to focus on specific issues and move away from optimizing general performance.”

Designing Machines Around Problems: The Co-Design Push to Exascale

A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they have actually been used in the past, to a lesser degree, as ways to enhance performance. Current co-design methods, however, go deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”

Setting a Path for the Next Generation of High-Performance Computing Architecture

At SC15, Intel talked about some transformational high-performance computing technologies and the architecture behind them: Intel® Scalable System Framework (Intel® SSF). Intel describes Intel SSF as “an advanced architectural approach for simplifying the procurement, deployment, and management of HPC systems, while broadening the accessibility of HPC to more industries and workloads.” Intel SSF is designed to eliminate the traditional bottlenecks: the so-called power, memory, storage, and I/O walls that system builders and operators have run into over the years.