
Manage HPC Cluster Growth

HPC Cluster Success

Smaller clusters often overload a single server with multiple services, such as file serving, resource scheduling, and monitoring/management. While this approach may work for systems with fewer than 100 nodes, these services can overload the single server or the cluster network as the cluster grows. This insideHPC Guide shows a plan for scalable HPC cluster growth.

Interview: Xcelerit Partners with Intel to Accelerate Code in Supercomputing

Hicham Lahlou

“Our core product is the Xcelerit SDK, a Software Development Kit that makes it easy for domain specialists (i.e. mathematicians in banks or geophysicists in energy exploration firms) to convert their existing code to take advantage of multi-core, GPU and other hardware accelerators.”

GPUs Win the Day at ASC14 Student Cluster Competition


“We need to educate new legions of students in high-performance computing,” said James Lin, vice director of the Center for HPC at SJTU. “Teaching HPC skills in a competition like this can be more effective than in a classroom. In fact, as a result of his experience, one of our team members decided to pursue an HPC PhD at Virginia Tech.”

Video: Designing Software Libraries and Middleware for Exascale Systems

DK Panda, Ohio State University

“This talk will focus on challenges in designing software libraries and middleware for upcoming exascale systems with millions of processors and accelerators. Two kinds of application domains will be considered: Scientific Computing and Big Data. For the scientific computing domain, we will discuss challenges in designing runtime environments for MPI and PGAS (UPC and OpenSHMEM) programming models, taking into account support for multi-core processors, high-performance networks, GPGPUs, and Intel MIC.”

Embedded HPC: GE Intelligent Platforms Certifies Allinea Software


HPC is reaching out of its traditional setting in large compute clusters and into embedded systems for modern signal processing applications in defense.

CCS in Japan Installs Second Petascale Cray CS300

Today Cray announced that the Center for Computational Sciences (CCS) at the University of Tsukuba in Japan has installed its second petascale Cray CS300 supercomputer.

HPC Cluster Building Blocks


The basic HPC cluster consists of at least one management/login node connected to a network of many worker nodes. Depending on the size of the cluster, there may be multiple management nodes used to run cluster-wide services, such as monitoring, workflow, and storage services. This insideHPC article series looks at the Five Essential Strategies for Managing HPC Clusters.

Allinea DDT Debugger Adds Support for NVIDIA CUDA 6


Today Allinea Software announced that the company’s Allinea DDT 4.2.1 debugging software has been tailored to offer full support for NVIDIA CUDA 6.

Are There Potholes on the Road to Exascale?

John Barr

“When OpenACC first appeared, it made sense to use this forum to experiment with new approaches while the use of GPUs in HPC was evolving rapidly, with the expectation that the best ideas would then be reintroduced into OpenMP. But OpenMP and OpenACC now seem to be diverging. Indeed, a comparison of OpenACC and OpenMP on the OpenACC web site says ‘efforts so far to include support for GPUs in the OpenMP specification are — in the opinions of many involved — at best insufficient, and at worst misguided.’”

Interview: Next-Generation Cori Supercomputer Coming to NERSC

Sudip Dosanjh

“We need to emphasize here that the Knights Landing processor is self-hosted, which means it’s not an accelerator and it’s not a coprocessor. The particular processor we will be having for NERSC-8 will have more than 60 cores, and each core will have multiple hardware threads. That’s a lot, right? Sixty cores per node with multiple hardware threads is a significant increase from both our Hopper and Edison systems, which have 24 cores per node. So we’re going to be working with our users to figure out the right amount of parallelism they need to expose in their applications. That’s one really big difference.”