Co-design for Data Analytics and Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend is the increased use of machine learning (Deep Learning) technologies. Indeed, machine learning has been drastically accelerated through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.

Network Co-design as a Gateway to Exascale

Achieving better scalability and performance at Exascale will require full data reach. Without this capability, onload architectures force all data to move to the CPU before allowing any analysis. The ability to analyze data everywhere means that every active component in the cluster will contribute to the computing capabilities and boost performance. In effect, the interconnect will become its own “CPU” and provide in-network computing capabilities.
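
To make the in-network computing idea concrete, the short sketch below shows what it looks like from the application side: a global sum expressed as a non-blocking MPI collective. On a fabric with collective offload, the reduction can be carried out by the switches and adapters while the CPUs keep working. The MPI calls and sample values here are an illustrative assumption, not code from the position paper.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local  = (double)(rank + 1);   /* stand-in for a per-node partial result */
    double global = 0.0;

    /* Non-blocking all-reduce: on an offload-capable interconnect, the sum
     * can be formed inside the switches and adapters instead of on the CPUs. */
    MPI_Request req;
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* The CPU is free to do other useful work while the reduction progresses. */

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    if (rank == 0)
        printf("global sum over %d ranks = %g\n", size, global);

    MPI_Finalize();
    return 0;
}

The application code is identical either way; whether the sum is formed on the hosts or inside the fabric is a property of the co-designed interconnect and the MPI library beneath it.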

Co-Design Offloading

The move to network offloading is the first step toward co-designed systems. Servicing the huge number of packets generated at modern data rates imposes a large amount of CPU overhead, which can significantly reduce network performance. Offloading network processing to the network interface card helped remove this bottleneck, along with several others.
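
A rough calculation shows why that overhead matters. The line rate and packet size below are illustrative assumptions, chosen only to put numbers on the problem, not figures from the article.

#include <stdio.h>

int main(void)
{
    double link_gbps    = 100.0;    /* assumed line rate: 100 Gb/s        */
    double mtu_bytes    = 4096.0;   /* assumed payload per packet (bytes) */
    double bits_per_pkt = mtu_bytes * 8.0;

    double pkts_per_sec = (link_gbps * 1e9) / bits_per_pkt;
    double ns_per_pkt   = 1e9 / pkts_per_sec;

    printf("~%.1f million packets/s, one every ~%.0f ns\n",
           pkts_per_sec / 1e6, ns_per_pkt);
    /* Roughly 3 million packets per second, one every ~330 ns: an onloaded
     * CPU spends those cycles on protocol processing, which is exactly what
     * offloading to the adapter gives back to the application. */
    return 0;
}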

The Evolution of HPC

“When the history of HPC is viewed in terms of technological approaches, three epochs emerge. The most recent epoch, that of co-design systems, is new and somewhat unfamiliar to many HPC practitioners. Each epoch is defined by a fundamental shift in design, new technologies, and the economics of the day.” “A network co-design model allows data algorithms to be executed more efficiently using smart interface cards and switches. As co-design approaches become more mainstream, design resources will begin to focus on specific issues and move away from optimizing general performance.”

Designing Machines Around Problems: The Co-Design Push to Exascale

A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they have actually been used, to a lesser degree, in the past as a way to enhance performance. Co-design methods are now reaching deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”

InsideHPC Guide to Technical Computing

Today’s High Performance Computing (HPC) systems offer the ability to model everything from proteins to galaxies. The insights and discoveries offered by these systems are nothing short of astounding. Indeed, the ability to process, move, and store data at unprecedented levels, often reducing jobs from weeks to hours, continues to move science and technology forward at an accelerating pace. This article series offers those considering HPC, both users and managers, guidance on the best way to deploy an HPC solution.

Three Questions to Ensure Your HPC Success

Successful HPC computing depends on choosing an architecture that addresses both application and institutional needs. In particular, finding a simple path to leading-edge HPC and data analytics is not difficult if you consider the capabilities and limitations of the various approaches in terms of performance, scaling, ease of use, and time to solution. Here are three questions whose careful consideration will help lead to a successful and cost-effective HPC solution.

Local or Cloud HPC?

Cloud computing has become another tool for the HPC practitioner. For some organizations, the ability of cloud computing to shift costs from capital to operating expenses is very attractive. Because all cloud solutions require use of the Internet, a basic analysis of data origins and destinations is needed. Here’s an overview of when local or cloud HPC makes the most sense.
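
One simple way to begin that analysis is to estimate how long the data movement itself takes. The dataset size and wide-area link speed below are hypothetical assumptions used only for illustration.

#include <stdio.h>

int main(void)
{
    double dataset_tb = 10.0;   /* assumed input dataset: 10 TB            */
    double wan_gbps   = 1.0;    /* assumed effective Internet link: 1 Gb/s */

    double bits  = dataset_tb * 1e12 * 8.0;
    double secs  = bits / (wan_gbps * 1e9);
    double hours = secs / 3600.0;

    printf("moving %.1f TB over a %.1f Gb/s link takes ~%.1f hours\n",
           dataset_tb, wan_gbps, hours);
    /* Roughly 22 hours one way: if data originates on site and results must
     * return, transfer time alone can tip the balance toward local HPC, while
     * data that is born in the cloud argues for computing there. */
    return 0;
}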

Understanding Your HPC Application Needs

Many HPC applications began as single-processor (single-core) programs. If these applications take too long on a single core or need more memory than is available, they must be modified to run on scalable systems. Fortunately, many of the important (and most used) HPC applications are already available for scalable systems. Some applications do not need large numbers of cores for effective performance, while others are highly scalable. Here is how to better understand your HPC application needs.
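
A quick way to gauge where a given application sits on that spectrum is an Amdahl's-law estimate of its speedup. The serial fraction used below is a hypothetical figure, not a measurement of any particular code.

#include <stdio.h>

int main(void)
{
    double serial_fraction = 0.05;              /* assumed non-parallelizable share of runtime */
    int    cores[]         = {1, 8, 64, 512};

    for (int i = 0; i < 4; i++) {
        int n = cores[i];
        /* Amdahl's law: speedup = 1 / (s + (1 - s)/n) */
        double speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / n);
        printf("%4d cores -> speedup %.1fx\n", n, speedup);
    }
    /* With 5% serial work the speedup levels off near 20x no matter how many
     * cores are added; a more parallel code keeps scaling much further. */
    return 0;
}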

Who Is Using HPC (and Why)?

In today’s highly competitive world, High Performance Computing (HPC) is a game changer. Though not as splashy as many other computing trends, the HPC market has continued to show steady growth and success over the last several decades. Market forecaster IDC expects the overall HPC market to hit $31 billion by 2019 while riding an 8.3% CAGR. The HPC market cuts across many sectors, including academia, government, and industry. Learn which industries are using HPC and why.