Exascale – A Race to the Future of HPC

From megaflops to gigaflops to teraflops to petaflops, and soon to exaflops, the march of HPC performance never stops. This whitepaper details some of the technical challenges that must be addressed in the coming years in order to reach exascale computing.

Next Generation Sequencing

With the massive surge in genomics research, the ability to process very large amounts of data quickly is now a requirement for any organization involved in genomics. While the cost of sequencing has dropped significantly, the amount of data produced has increased as well. This article describes next generation sequencing and how a combination of hardware and innovative software can decrease the time needed to sequence genomes.

Co-design for Data Analytics And Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend includes the increased use of machine learning (deep learning) technologies; indeed, machine learning speed has been drastically increased through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.

Network Co-design as a Gateway to Exascale

Achieving better scalability and performance at exascale will require “full data reach,” the ability to analyze data wherever it resides. Without this capability, onload architectures force all data to move to the CPU before any analysis can begin. Analyzing data everywhere means that every active component in the cluster contributes to the computing capability and boosts performance. In effect, the interconnect becomes its own “CPU” and provides in-network computing capabilities.
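To make the idea concrete, here is a minimal MPI-3 sketch of the overlap that in-network computing enables: a nonblocking reduction is started, independent work proceeds on the CPU, and an offload-capable interconnect can progress the reduction inside the network in the meantime. The overlap pattern is standard MPI; whether the reduction actually executes in the switches depends on the particular hardware and MPI library.

/* Sketch: overlap computation with a reduction via a nonblocking
 * collective. On an offload-capable interconnect the reduction can
 * progress in the network while the CPU does other work; on an
 * onload architecture the CPU must drive it. Requires any MPI-3 library. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double local = 1.0, global = 0.0;
    MPI_Request req;

    /* Start the reduction; control returns immediately. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* Independent work that overlaps with the in-flight reduction. */
    double busy = 0.0;
    for (int i = 0; i < 1000000; i++)
        busy += (double)i * 1e-9;

    /* Complete the collective before using the result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("sum = %f (overlapped work: %f)\n", global, busy);

    MPI_Finalize();
    return 0;
}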

Co-Design Offloading

The move to network offloading is the first step toward co-designed systems. Servicing the huge number of packets at modern data rates imposes a large amount of CPU overhead, which can significantly reduce network performance. Offloading network processing to the network interface card removes this bottleneck, along with several related ones.
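As an illustration of what offloading buys, here is a minimal MPI-3 one-sided sketch. With an RDMA-capable, offloading NIC, the MPI_Put below can deposit data directly into the target rank's memory without the target CPU handling the packets; the buffer layout and values are purely illustrative.

/* Sketch: one-sided, RDMA-style transfer with MPI-3. An offloading
 * NIC can move the data without per-packet work on the target CPU;
 * note that rank 1 never posts a receive. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double buf = 0.0;
    MPI_Win win;
    /* Expose one double on every rank as a remotely accessible window. */
    MPI_Win_create(&buf, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0 && size > 1) {
        double payload = 42.0;
        /* Write directly into rank 1's window. */
        MPI_Put(&payload, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("rank 1 received %f without calling MPI_Recv\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}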

The Evolution of HPC

“When the history of HPC is viewed in terms of technological approaches, three epochs emerge. The most recent epoch, that of co-design systems, is new and somewhat unfamiliar to many HPC practitioners. Each epoch is defined by a fundamental shift in design, new technologies, and the economics of the day.” “A network co-design model allows data algorithms to be executed more efficiently using smart interface cards and switches. As co-design approaches become more mainstream, design resources will begin to focus on specific issues and move away from optimizing general performance.”

Designing Machines Around Problems: The Co-Design Push to Exascale

A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they have actually been used in the past, to a lesser degree, as ways to enhance performance. Current co-design methods now reach deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”

Faster and More Accurate Exploration using Shared Storage with Parallel Access

This whitepaper explains how selecting the proper parallel file system for your application can increase the performance of complex simulations and reduce time to completion.
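As background on how applications actually exploit a parallel file system, here is a minimal collective MPI-IO sketch in which each rank writes its slice of one shared file in a single coordinated call; the file name and sizes are illustrative.

/* Sketch: collective MPI-IO write, a common way HPC codes use a
 * parallel file system such as Lustre or GPFS. The collective call
 * lets the MPI-IO layer aggregate requests to suit the file system. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    enum { N = 1024 };
    double chunk[N];
    for (int i = 0; i < N; i++)
        chunk[i] = (double)rank;          /* this rank's slice of data */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "results.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset in the shared file. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, offset, chunk, N, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}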

Best In Class HPC Cloud Solutions

Cloud computing is growing and is replacing many on-premises data centers for High Performance Computing (HPC) applications. However, the move toward a cloud infrastructure is not without challenges. This whitepaper discusses many of the challenges in moving from an on-premises HPC solution to an HPC cloud solution.

Successful Implementations That Demand Flexible HPC

Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, these organizations rely on significant computing capability to create innovative products and conduct leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance, and customization must all be considered before installing and operating a successful environment.