Why the OS is So Important when Running HPC Applications

This is the fourth entry in an insideHPC series that explores the HPC transition to the cloud and what your business needs to know about this evolution. The series, compiled in a complete Guide available for download, covers cloud computing for HPC, why the OS is important when running HPC applications, OpenStack fundamentals, and more.

Why IMC is Right for Today’s Fast-Data and Big-Data Applications

More and more companies are turning to in-memory computing (IMC) as they struggle to analyze and process increasingly large amounts of data. That said, it’s often hard to make sense of the growing world of IMC products and solutions. A recent white paper from GridGain aims to help businesses decide which solution best matches their specific needs.

Cloud Computing Continues to Influence HPC

Cloud technologies are influencing HPC just as they are influencing the rest of enterprise IT. This is the second entry in an insideHPC series that explores the HPC transition to the cloud and what your business needs to know about this evolution. The series, compiled in a complete Guide available for download, covers cloud computing for HPC, industry examples, IaaS components, OpenStack fundamentals, and more.

In Memory Computing Speeds Results

In-Memory Computing can accelerate traditional applications by using a memory-first design. Applicable to a wide range of domains, In-Memory Computing and In-Memory Data Grids take advantage of the latest trends in computer systems technology. “In-memory computing is designed to address some of the most critical and real-time task requirements today. These include real-time fraud detection, biometrics and border security, and financial risk analytics. All of these use cases require very low-latency access to very large amounts of data, which results in faster and more accurate decisions.”
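
The core of a memory-first design is simple: keep the working set in RAM and fall back to slower storage only on a miss. The sketch below is a minimal, hypothetical illustration of that idea in Python, not any vendor’s API; read_from_disk is a stand-in for a slow storage tier.

```python
import time

# Hypothetical "slow" backing store standing in for disk or remote storage.
def read_from_disk(key):
    time.sleep(0.01)  # simulate ~10 ms storage latency
    return f"record-{key}"

cache = {}  # the in-memory tier: hot data lives in RAM

def get(key):
    # Memory-first: serve from RAM when possible, hit storage only on a miss.
    if key not in cache:
        cache[key] = read_from_disk(key)
    return cache[key]

# The first access pays the storage latency; repeated accesses take microseconds.
start = time.perf_counter(); get(42); cold = time.perf_counter() - start
start = time.perf_counter(); get(42); warm = time.perf_counter() - start
print(f"cold: {cold * 1000:.2f} ms, warm: {warm * 1000:.4f} ms")
```

The same pattern, distributed and replicated across the RAM of many nodes, is what an In-Memory Data Grid provides at scale.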

Five Ways Scale-Up Systems Save Money and Improve TCO

The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single-core applications need to be modified to use extra processors (and accelerators). Unfortunately, there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.
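
To make the required modification concrete, here is a minimal sketch, assuming a hypothetical per-record score function, of turning a serial Python loop into a scale-up version that spreads the same work across local cores using the standard library’s multiprocessing pool:

```python
from multiprocessing import Pool
import os

def score(record):
    # Stand-in for the per-record work inside a formerly serial loop.
    return sum(i * i for i in range(record))

records = list(range(10_000, 10_100))

if __name__ == "__main__":
    # Serial version: one core does everything.
    serial = [score(r) for r in records]

    # Scale-up version: the same loop spread across all local cores.
    with Pool(processes=os.cpu_count()) as pool:
        parallel = pool.map(score, records)

    assert serial == parallel  # same results, computed concurrently
```

A scale-out version of the same loop would instead have to partition records across machines and move data over a network, which is why no single programming model covers both cases equally well.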

Speeding Workloads at the Dell EMC HPC Innovation Lab

The Dell EMC HPC Innovation Lab, substantially powered by Intel, has been established to provide customers with best practices for configuring and tuning systems and their applications for optimal performance and efficiency through blogs, whitepapers, and other resources. “Dell is utilizing the lab’s world-class infrastructure to characterize performance behavior and to test and validate upcoming technologies.”

Scaling Software for In-Memory Computing

“The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.”

Selecting HPC Network Technology

“With three primary network technology options widely available, each with advantages and disadvantages in specific workload scenarios, the choice of solution partner that can deliver the full range of choices together with the expertise and support to match technology solution to business requirement becomes paramount.”

Scaling Hardware for In-Memory Computing

The two methods of scaling processors are defined by how the memory architecture scales and are called scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. “High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike distributed memory systems, there is no need to copy data from system to system because all the memory is globally usable by all processors.”
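
As a small illustration of the “no copying” property of scale-up, shared-memory designs, the sketch below has four Python threads read disjoint slices of a single list that exists exactly once in one address space. It is a toy example: CPython’s GIL means it demonstrates shared visibility rather than real parallel speedup, and the worker function is hypothetical.

```python
import threading

# Scale-up sketch: one address space, so workers partition indices, not data.
data = list(range(1_000_000))  # lives once, globally visible to all threads
partials = [0, 0, 0, 0]

def worker(wid, nworkers):
    # Each thread reads its stride of the SAME list; nothing is copied anywhere.
    partials[wid] = sum(data[wid::nworkers])

threads = [threading.Thread(target=worker, args=(i, 4)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(partials) == sum(data))  # True
# On a scale-out (distributed-memory) cluster, each node would instead receive
# a copy of its slice over the network (e.g., via an MPI scatter) before summing.
```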

HPC Networking Trends in the TOP500

The TOP500 list is a good proxy for how different interconnect technologies are being adopted for the most demanding workloads, which makes it a useful leading indicator for enterprise adoption. The essential takeaway is that the world’s leading and most esoteric systems are currently dominated by vendor-specific technologies. The OpenFabrics Alliance (OFA) will be increasingly important in the coming years as a forum that brings together the leading high performance interconnect vendors and technologies to deliver a unified, cross-platform, transport-independent software stack.