Video: Optimizing Applications for the Cori Supercomputer at NERSC

In this video from SC15, NERSC shares its experience optimizing applications to run on the new Intel Xeon Phi processors (code name Knights Landing) that will power the Cori supercomputer by the summer of 2016. “A key goal of the Cori Phase 1 system is to support the increasingly data-intensive computing needs of NERSC users. Toward this end, Phase 1 of Cori will feature more than 1,400 Intel Haswell compute nodes, each with 128 gigabytes of memory per node. The system will provide about the same sustained application performance as NERSC’s Hopper system, which will be retired later this year. The Cori interconnect will have a dragonfly topology based on the Aries interconnect, identical to NERSC’s Edison system.”

Podcast: Molly Rector from DDN on the Changing Face of HPC Storage

In this Graybeards Podcast, Molly Rector from DDN describes how HPC storage technologies are mainstreaming into the enterprise space. “In HPC there are thousands of compute cores crunching on petabytes of data. For Oil & Gas companies, it’s seismic and wellhead analysis; with bioinformatics it’s genomic/proteomic analysis; and with financial services, it’s economic modeling/backtesting trading strategies. For today’s enterprises such as retailers, it’s customer activity analytics; for manufacturers, it’s machine sensor/log analysis; and for banks/financial institutions, it’s credit/financial viability assessments. Enterprise IT might not have thousands of cores at their disposal just yet, but it’s not far off. Molly thinks one way to help enterprise IT is to provide a SuperComputer as a service (ScaaS?) offering, where top 10 supercomputers can be rented out by the hour, sort of like a supercomputing compute/data cloud.”

Multi-Level Parallelism with Intel Xeon Phi

“The combination of MPI and OpenMP is a topic that many developers have explored in order to determine the optimal approach. Whether to use OpenMP for outer loops and MPI within, or to create separate MPI processes and use OpenMP within, can lead to varying levels of performance. In most cases, determining which method will yield the best results will involve a deep understanding of the application, and not just rearranging directives.”
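The hybrid arrangement the article alludes to is easier to see in code. Below is a minimal sketch (mine, not from the article) of the common pattern: each MPI rank owns a slice of a vector, OpenMP threads reduce that slice on-node, and a single MPI_Allreduce combines the partial sums. N_LOCAL and the data values are arbitrary placeholders.

```c
/* Minimal hybrid MPI+OpenMP sketch: OpenMP threads reduce each rank's
 * local slice, then MPI_Allreduce combines the partial sums across ranks. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N_LOCAL 1000000  /* elements owned by each rank (arbitrary) */

int main(int argc, char **argv) {
    int provided, rank;
    /* Request FUNNELED: only the master thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double x[N_LOCAL];
    for (int i = 0; i < N_LOCAL; i++) x[i] = 1.0;

    double local = 0.0, global = 0.0;
    /* OpenMP handles the on-node loop... */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < N_LOCAL; i++)
        local += x[i] * x[i];

    /* ...MPI handles the cross-node reduction. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product = %f (threads per rank: %d)\n",
               global, omp_get_max_threads());
    MPI_Finalize();
    return 0;
}
```

Built with something like mpicc -fopenmp, the number of MPI ranks and OMP_NUM_THREADS can then be varied independently, which is exactly the tuning space the article describes.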

Ingram Micro Artizen Solutions for HPC

In this video from the GPU Technology Conference, Rick Young from Ingram Micro describes the company’s Artizen HPC solutions. “Available now to channel partners in the U.S., the distributor’s new and exclusive line of Artizen High Performance Computing (HPC) offerings includes turnkey high performance servers, ultimate workstations, and customizable supercomputing clusters, as well as computing integration and software installation services.”

Monitoring and Management Interfaces for GPU Devices in a Cluster Environment

“This presentation will provide an overview of the Nvidia Tesla Deployment Kit (TDK) from a user and a system administrator point of view. TDK contains the Nvidia Management Library (NVML) and nvidia-healthmon, a tool for detecting and troubleshooting known GPU issues in a cluster environment. Usage models within a cluster environment will be presented, along with a discussion of how existing resource management tools can be extended to improve allocation and accounting of GPU resources.”
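For a sense of what NVML exposes to such tools, here is a minimal sketch (mine, not from the presentation) that enumerates the GPUs on a node and prints the per-device utilization and memory figures a cluster monitoring agent might collect:

```c
/* Minimal NVML sketch: list each GPU's utilization and memory use.
 * Link with -lnvidia-ml. */
#include <stdio.h>
#include <nvml.h>

int main(void) {
    nvmlReturn_t rc = nvmlInit();
    if (rc != NVML_SUCCESS) {
        fprintf(stderr, "nvmlInit failed: %s\n", nvmlErrorString(rc));
        return 1;
    }

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);

    for (unsigned int i = 0; i < count; i++) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        nvmlUtilization_t util;   /* percent busy over the last sample */
        nvmlMemory_t mem;         /* bytes total/used/free */

        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS)
            continue;
        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetUtilizationRates(dev, &util);
        nvmlDeviceGetMemoryInfo(dev, &mem);

        printf("GPU %u (%s): %u%% busy, %llu / %llu MiB memory used\n",
               i, name, util.gpu,
               (unsigned long long)(mem.used >> 20),
               (unsigned long long)(mem.total >> 20));
    }

    nvmlShutdown();
    return 0;
}
```

nvidia-healthmon layers diagnostics on top of this kind of per-device data, and resource managers can poll the same library to account for GPU usage per job.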

Video: ClusterStor Update

“HPC storage solutions and futures continue to evolve as growth and performance requirements permeate every HPC market segment. Torben discusses these challenges and how the company’s storage solutions are addressing these shifting needs with new developments around disk drives, RAID, CIFS, security, small file handling, and other related technologies.”

Managing the GPUs of Your Cluster in a Flexible Way with rCUDA

“In this talk, we introduce the rCUDA remote GPU virtualization framework, which has been shown to be the only one that supports the most recent CUDA versions, in addition to leveraging the InfiniBand interconnect for performance. Furthermore, we also present the latest developments within this framework, related to the use of low-power processors, enhanced job schedulers, and virtual machine environments.”
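rCUDA’s key property is transparency: the client application is ordinary CUDA runtime code, and the remote GPUs are selected outside the program. A minimal sketch of what such an unmodified client looks like (the RCUDA_* environment variable names in the comment follow my reading of the rCUDA user guide and should be treated as illustrative):

```c
/* Sketch of rCUDA's transparency: this is plain CUDA runtime code with no
 * rCUDA-specific calls. Pointing it at remote GPUs happens outside the
 * program, roughly:
 *   export RCUDA_DEVICE_COUNT=1
 *   export RCUDA_DEVICE_0=gpuserver:0   (illustrative variable names)
 * plus linking against the rCUDA library instead of NVIDIA's libcudart. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess) {
        fprintf(stderr, "no CUDA devices visible\n");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        /* Under rCUDA, device i may physically live on another node. */
        printf("device %d: %s, %d multiprocessors\n",
               i, p.name, p.multiProcessorCount);
    }
    return 0;
}
```

Because the program itself never changes, a scheduler can hand the same binary a different set of remote GPUs on every run, which is what makes the job-scheduler integration mentioned in the talk possible.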

Troy Baer from NICS Wins Lifetime Achievement Adaptive Award

“Congratulations go out to Troy Baer, HPC system administrator at the National Institute for Computational Sciences (NICS), University of Tennessee. Troy Baer’s contributions in scheduling and resource management using Moab have helped Kraken—NICS’ flagship computing resource and the first academic computer to break the petaflop barrier—achieve outstanding 90-95% utilization rates since 2010. Baer’s administrative capabilities enable researchers in numerous scientific arenas, from climate to materials science to astrophysics, to achieve breakthroughs not yet possible on other resources. In November 2012, Baer helped NICS’ Beacon system secure a No. 1 ranking on the Green500 list of energy-efficient supercomputers.”

DK Panda Presents: Programming Models for Exascale Systems

“This talk will focus on programming models and their designs for upcoming exascale systems with millions of processors and accelerators. The current status and future trends of MPI and PGAS (UPC and OpenSHMEM) programming models will be presented. We will discuss challenges in designing runtime environments for these programming models, taking into account support for multi-core nodes, high-performance networks, GPGPUs, Intel MIC, scalable collectives (multi-core-aware, topology-aware, and power-aware), non-blocking collectives using an offload framework, one-sided RMA operations, and schemes and architectures for fault tolerance/fault resilience.”
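One of the listed features, non-blocking collectives, is easy to illustrate. The sketch below (mine, not from the talk) uses MPI-3’s MPI_Iallreduce to start a reduction, overlap it with unrelated local work, and only then wait for the result:

```c
/* Minimal sketch of an MPI-3 non-blocking collective: start an
 * all-reduce, overlap it with independent local work, then wait.
 * The "local work" loop is a stand-in for real computation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, sum = 0.0;
    MPI_Request req;

    /* Start the reduction without blocking... */
    MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ...do communication-independent work while it progresses... */
    double busy = 0.0;
    for (int i = 0; i < 1000000; i++) busy += 1e-6;

    /* ...then complete the collective before using the result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0) printf("sum of ranks = %f\n", sum);
    MPI_Finalize();
    return 0;
}
```

How much overlap this actually buys depends on the runtime and network offload support, which is precisely the design space the talk addresses.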

High Performance Computing at CSCS

“With around 3.2 billion computer operations (3.2 gigaflops) per watt, the combination of GPUs and CPUs makes “Piz Daint” one of the world’s most energy-efficient supercomputers in the petaflop performance class.”