Video: Cycle Computing Works with Dell to Deliver More Science for More Users

In this video from ISC 2016, Tim Carroll describes how Cycle Computing is working with Dell Technologies to deliver more science for more users. Cycle Computing’s CycleCloud software suite is the leading cloud orchestration, provisioning, and data management platform for Big Compute, Big Data, and large technical computing applications running on any public, private, or internal environment.

Slidecast: For AMD, It’s Time to ROCm!

“AMD has been away from the HPC space for a while, but now they are coming back in a big way with an open software approach to GPU computing. The Radeon Open Compute Platform (ROCm) was born from the Boltzmann Initiative announced last year at SC15. Now available on GitHub, the ROCm Platform brings a rich foundation to advanced computing by better integrating the CPU and GPU to solve real-world problems.”
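
For readers who want to see what code on the ROCm stack looks like, below is a minimal sketch of a HIP vector-add kernel in C++. The kernel name, sizes, and launch parameters are illustrative rather than taken from AMD's samples; HIP deliberately mirrors the CUDA programming model, so source like this can be built with hipcc for AMD GPUs (or recompiled for NVIDIA hardware).

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Element-wise vector add; __global__ kernels work the same way in HIP as in CUDA.
    __global__ void vector_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;  // illustrative problem size
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

        float *da, *db, *dc;
        hipMalloc((void**)&da, n * sizeof(float));
        hipMalloc((void**)&db, n * sizeof(float));
        hipMalloc((void**)&dc, n * sizeof(float));

        hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

        // 256 threads per block, with enough blocks to cover all n elements.
        hipLaunchKernelGGL(vector_add, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);

        hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  // expect 3.0

        hipFree(da); hipFree(db); hipFree(dc);
        return 0;
    }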

Co-design for Data Analytics And Machine Learning

The big data analytics market has seen rapid growth in recent years. Part of this trend includes the increased use of machine learning (deep learning) technologies. Indeed, machine learning speed has been drastically increased through the use of GPU accelerators. The issues facing the HPC market are similar to those facing the analytics market: efficient use of the underlying hardware. A position paper from the third annual Big Data and Extreme Computing conference (2015) illustrates the power of co-design in the analytics market.

The Future of HPC Application Management in a Post Cloud World

The prevalence of cloud computing has changed the HPC landscape, necessitating HPC management tools that can manage and simplify complex environments in order to optimize flexibility and speed. Altair’s new solution, PBS Cloud Manager, makes it easy to build and manage HPC application stacks.

Network Co-design as a Gateway to Exascale

Achieving better scalability and performance at Exascale will require full data reach. Without this capability, onload architectures force all data to move to the CPU before allowing any analysis. The ability to analyze data everywhere means that every active component in the cluster will contribute to the computing capabilities and boost performance. In effect, the interconnect will become its own “CPU” and provide in-network computing capabilities.
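
To make the onload-versus-offload distinction concrete, here is a minimal sketch (standard MPI-3 calls from C++; the buffer size and the do_local_work placeholder are invented for illustration) of overlapping a non-blocking MPI_Iallreduce with local computation. With a collective offloaded to the NIC or switch, the reduction can make progress in the network while the CPU keeps computing; in a pure onload design, that progress largely falls back on the CPU.

    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    // Placeholder for application work that does not depend on the reduction result.
    static void do_local_work(std::vector<double>& scratch) {
        for (double& x : scratch) x = x * 1.0001 + 1.0;
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        std::vector<double> sendbuf(n, rank + 1.0), recvbuf(n, 0.0), scratch(n, 0.0);

        // Start the reduction; with hardware collective offload it can progress
        // in the network while the CPU does the unrelated work below.
        MPI_Request req;
        MPI_Iallreduce(sendbuf.data(), recvbuf.data(), n, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        do_local_work(scratch);             // computation overlapped with the collective

        MPI_Wait(&req, MPI_STATUS_IGNORE);  // reduction result is now valid in recvbuf
        if (rank == 0) printf("recvbuf[0] = %f\n", recvbuf[0]);

        MPI_Finalize();
        return 0;
    }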

Facilitate HPC Deployments with Reference Designs for Intel Scalable System Framework

With the Intel Scalable System Framework Architecture Specification and Reference Designs, the company is making it easier to accelerate the time to discovery through high-performance computing. The Reference Architectures (RAs) and Reference Designs take Intel Scalable System Framework to the next step: deploying it in ways that will allow users to confidently run their workloads and allow system builders to innovate and differentiate designs.

Co-Design Offloading

The move to network offloading is the first step in co-designed systems. Servicing the huge number of packets generated at modern data rates imposes significant processing overhead, which can substantially reduce network performance. Offloading network processing to the network interface card helped remove this bottleneck, along with several others.

The Evolution of HPC

“When the history of HPC is viewed in terms of technological approaches, three epochs emerge. The most recent epoch, that of co-design systems, is new and somewhat unfamiliar to many HPC practitioners. Each epoch is defined by a fundamental shift in design, new technologies, and the economics of the day. A network co-design model allows data algorithms to be executed more efficiently using smart interface cards and switches. As co-design approaches become more mainstream, design resources will begin to focus on specific issues and move away from optimizing general performance.”

Designing Machines Around Problems: The Co-Design Push to Exascale

A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”

Making it Easy to Introduce Liquid Cooling to the Data Center

With the release of high-wattage processors, liquid cooling is becoming a necessity for HPC data centers. Liquid cooling’s ability to remove heat directly from these high-wattage components within the servers is well established. However, facilities management sometimes has concerns that need to be addressed before liquid cooling is introduced to the data center.