Introduction to Parallel Programming with OpenACC

“This is the first in a series of short videos to introduce you to parallel programming with OpenACC and the PGI compilers, using C++ or Fortran. You will learn by example how to build a simple example program, how to add OpenACC directives, and how to rebuild the program for parallel execution on a multicore system. To get the most out of this video, you should download the example programs and follow along on your workstation.”
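
The example programs themselves ship with the video; purely as a rough illustration of the pattern it teaches, here is a minimal C++ sketch (a hypothetical SAXPY loop, not the video's code) with a single OpenACC directive added to the compute loop.

```cpp
// Minimal OpenACC sketch (not the video's example): a SAXPY loop with one
// directive asking the compiler to parallelize it.
#include <cstdio>
#include <cstdlib>

int main() {
    const int n = 1 << 20;
    float *x = (float *)malloc(n * sizeof(float));
    float *y = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    const float a = 3.0f;

    // The copy clauses describe which arrays move in and out of the
    // parallel region; on a multicore CPU target they are effectively no-ops.
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    free(x);
    free(y);
    return 0;
}
```

With the PGI compilers, a file like this would typically be rebuilt for multicore execution with something like `pgc++ -acc -ta=multicore saxpy.cpp`; without the `-acc` flag the directive is simply ignored and the program runs serially.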

HPC Workflows Using Containers

“In this talk we will discuss a workflow for building and testing Docker containers and their deployment on an HPC system using Shifter. Docker is widely used by developers as a powerful tool for standardizing the packaging of applications across multiple environments, which greatly eases porting efforts. Shifter, on the other hand, provides a container runtime that has been built specifically to fit the needs of HPC. We will briefly introduce these tools while discussing the advantages of using these technologies to fulfill the needs of specific workflows for HPC, e.g., security, high performance, portability, and parallel scalability.”

PSSC Labs Updates CBeST Cluster Management Software

Today PSSC Labs announced it has refreshed its CBeST (Complete Beowulf Software Toolkit) cluster management package. CBeST is a proven platform already deployed on over 2,200 PowerWulf Clusters to date, and with this refresh PSSC Labs is adding a host of new features and upgrades to ensure users have everything needed to manage, monitor, maintain, and upgrade their HPC cluster. “PSSC Labs is unique in that we manufacture all of our own hardware and develop our own cluster management toolkits in house. While other companies simply cobble together third-party hardware and software, PSSC Labs custom builds every HPC cluster to achieve performance and reliability boosts of up to 15%,” said Alex Lesser, Vice President of PSSC Labs.

Spack: A Package Manager for Supercomputers, Linux, and macOS

“HPC software is becoming increasingly complex. The space of possible build configurations is combinatorial, and existing package management tools do not handle these complexities well. Because of this, most HPC software is built by hand. This talk introduces Spack, an open-source tool for scientific package management that helps developers and cluster administrators avoid wasting countless hours porting and rebuilding software.” A tutorial video on using Spack is also included.

Intel Xeon Phi Processor Intel AVX-512 Programming in a Nutshell

In this special guest feature, James Reinders discusses the use of Intel® Advanced Vector Extensions 512 (Intel® AVX-512), covering a variety of vectorization techniques for tapping the performance of Intel AVX-512.
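
The article covers these techniques in depth; purely as a flavor of what explicit AVX-512 code looks like at the intrinsics level, here is a minimal sketch (not taken from the article) performing one fused multiply-add across 16 packed single-precision floats.

```cpp
// Illustrative AVX-512 intrinsics sketch: one fused multiply-add over
// 16 packed floats (requires a CPU and compiler with AVX-512F support).
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(64) float a[16], b[16], c[16];
    for (int i = 0; i < 16; ++i) { a[i] = 1.0f; b[i] = 2.0f; c[i] = 3.0f; }

    __m512 va = _mm512_load_ps(a);            // load 16 floats into a 512-bit register
    __m512 vb = _mm512_load_ps(b);
    __m512 vc = _mm512_load_ps(c);
    __m512 vr = _mm512_fmadd_ps(va, vb, vc);  // elementwise a * b + c
    _mm512_store_ps(c, vr);

    printf("c[0] = %f\n", c[0]);              // 1 * 2 + 3 = 5
    return 0;
}
```

Explicit intrinsics are only one route to these instructions; compiler auto-vectorization and OpenMP SIMD directives can generate the same Intel AVX-512 code from ordinary loops.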

Intel® VTune™ Amplifier Turns Raw Profiling Data Into Performance Insights

Discovering where the performance bottlenecks are and knowing what to do about them can be a mysterious and complex art, one that calls for sophisticated performance analysis tools. That’s where Intel® VTune™ Amplifier XE 2017, part of Intel Parallel Studio XE, comes in.

dCUDA: Distributed GPU Computing with Hardware Overlap

“Over the last decade, CUDA and the underlying GPU hardware architecture have continuously gained popularity in various high-performance computing application domains such as climate modeling, computational chemistry, or machine learning. Despite this popularity, we lack a single coherent programming model for GPU clusters. We therefore introduce the dCUDA programming model, which implements device-side remote memory access.”
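
The dCUDA API itself is not shown in this excerpt, so rather than guess at it, the sketch below shows the conventional host-driven MPI+CUDA exchange that device-side remote memory access is intended to replace; the function and variable names are placeholders, not dCUDA calls.

```cpp
// Hedged sketch of the conventional host-driven MPI+CUDA exchange pattern
// that device-side remote memory access (as in dCUDA) aims to eliminate.
// exchange_step, d_field, and h_halo are placeholder names, not dCUDA API.
#include <mpi.h>
#include <cuda_runtime.h>

void exchange_step(float *d_field, float *h_halo, int halo_n,
                   int peer, MPI_Comm comm) {
    // ... the compute kernel for this step has already been launched on the GPU ...
    cudaDeviceSynchronize();                              // host must wait for the GPU to finish
    cudaMemcpy(h_halo, d_field, halo_n * sizeof(float),
               cudaMemcpyDeviceToHost);                   // stage the halo on the host
    MPI_Sendrecv_replace(h_halo, halo_n, MPI_FLOAT,
                         peer, 0, peer, 0, comm,
                         MPI_STATUS_IGNORE);              // host-side communication with the peer rank
    cudaMemcpy(d_field, h_halo, halo_n * sizeof(float),
               cudaMemcpyHostToDevice);                   // copy back before the next step
}
```

In this host-driven pattern, computation and communication are serialized around the synchronization point; dCUDA's device-side remote memory access instead lets code running on the GPU initiate the transfers, opening the door to the hardware overlap referenced in the title.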

Managing Node Configuration with 1000s of Nodes

Ira Weiny from Intel presented this talk at the OpenFabrics Workshop. “Individual node configuration when managing thousands or tens of thousands of nodes in a cluster can be a daunting challenge. Two key daemons that aid the management of individual nodes in a large fabric are now part of the rdma-core package: IBACM and rdma-ndd.”

Max Planck Institute Adopts ScaleMP Cluster Virtualization Software

“We selected vSMP Foundation from ScaleMP as the only available solution that turns cluster hardware into an SMP; as a single machine, it allows us to distribute jobs without using any batch/queuing system, and we only need to manage one logical entity rather than a collection of nodes,” said Dr. Dirk Bockelmann of the Department of NMR-based Structural Biology at the Max Planck Institute for Biophysical Chemistry. “We are looking forward to putting vSMP Foundation to work for our scientists.”

Adaptive Computing Releases Moab HPC Suite 9.1.1

Today Adaptive Computing announced the latest release of Moab HPC Suite and related add-ons. The new release extends easy job submission and workload management to new platforms by delivering a version of Viewpoint that can work directly with either Torque or Slurm. Thanks to this “Open Platform” extension, other related products now work automatically with either resource manager, including remote visualization, submission of high-throughput workloads (Nitro handles tens of thousands to millions of tasks), and Adaptive Computing’s new Reporting & Analytics solution.