Achieving Near-Native GPU Performance in the Cloud

John Paul Walters

“In this session we describe how GPUs can be used within virtual environments with near-native performance. We begin by showing GPU performance across four hypervisors: VMware ESXi, KVM, Xen, and LXC. After showing the performance characteristics of each platform, we extend the results to the multi-node case with nodes interconnected by QDR InfiniBand. We demonstrate multi-node GPU performance using GPUDirect-enabled MPI, achieving efficiencies of 97-99% of a non-virtualized system.”
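
The near-native efficiency comes from letting MPI move data directly between GPU buffers instead of staging it through host memory. As a rough illustration (not code from the session), the sketch below shows the CUDA-aware MPI pattern that GPUDirect accelerates: device pointers are handed straight to MPI_Send/MPI_Recv. It assumes an MPI library built with CUDA support (for example Open MPI or MVAPICH2 with CUDA enabled), one GPU per rank, and an arbitrary buffer size; run with two ranks.

/* Sketch of a CUDA-aware MPI ping-pong: device pointers are passed
 * directly to MPI, with no cudaMemcpy staging through the host.
 * Assumes a CUDA-enabled MPI build; run with: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                 /* 1M doubles per message */
    double *d_buf;
    cudaSetDevice(0);                      /* one GPU per rank assumed */
    cudaMalloc((void **)&d_buf, (size_t)n * sizeof(double));

    if (rank == 0) {
        /* Send straight from device memory; the MPI library (with
         * GPUDirect) handles the GPU-to-NIC path. */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}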

Video: Have Hard Disks Joined the Grateful Dead?


Have spinning disk drives joined the Grateful Dead? That is the contention in this video, where industry analyst Mark Peters from ESG dons his tie-dyed shirt and declares that flash is the new storage of choice in the datacenter.

ANSYS Powers Quantum Computing Engineering at D-Wave


D-Wave Systems reports that the company is designing and building the world’s most advanced quantum computers with help from engineering simulation solutions from ANSYS. This next generation of supercomputers uses quantum mechanics to massively accelerate computation and has the potential to solve some of the most complex computing problems facing organizations today.

Simulating Global Atmosphere with NICAM on TSUBAME2.5 Using OpenACC


“OpenACC was applied to a global high-resolution atmosphere model named NICAM. We executed the dynamical core test without re-writing any specific kernel subroutines for GPU execution. Only 5% of the lines of source code were modified, demonstrating good portability. The results showed that the kernels generated by OpenACC achieved good performance, commensurate with the memory performance of the GPU, as well as good weak scalability. A large-scale simulation was carried out using 2560 GPUs, which achieved 60 TFLOPS.”
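
As a rough illustration of the directive-based approach described here (not NICAM's Fortran source), the hypothetical C loop below shows why so few lines need to change: an existing kernel is offloaded by adding a single OpenACC pragma while the loop body itself is left untouched.

/* Hypothetical stand-in for one dynamical-core loop; only the pragma is new.
 * Compile with an OpenACC compiler, e.g. pgcc/nvc -acc. */
void smooth(int n, const double *restrict in, double *restrict out)
{
    /* Offload the loop to the GPU and manage the data movement. */
    #pragma acc parallel loop copyin(in[0:n]) copyout(out[0:n])
    for (int i = 1; i < n - 1; i++) {
        out[i] = 0.5 * in[i] + 0.25 * (in[i - 1] + in[i + 1]);
    }
}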

Podcast: Arden L. Bement on Blue Waters and the Future of HPC

Arden L. Bement

In this podcast from the 2015 NCSA Blue Waters Symposium, Arden L. Bement discusses the Blue Waters supercomputer and the future of HPC. Formerly Director of the NSF, Bement keynoted the symposium and is currently the Davis A. Ross Distinguished Professor Emeritus and Adjunct Professor of the College of Technology at Purdue University.

Scaling STAR-CCM+ to 55,000 Cores on Hornet

Hornet Supercomputer

Today CD-adapco announced a significant scalability milestone for its STAR-CCM+ CFD software. Optimized over the course of a year in collaboration with HLRS and SICOS BW, STAR-CCM+ was run on the entirety of the 1.045 PetaFlop Hermit cluster, managing to maintain perfect scalability beyond 55,000 cores.

Attacking HIV with Titan and Blue Waters


“The highly parallel molecular dynamics code NAMD was one of the first codes to run on a GPU cluster when G80 and CUDA were introduced in 2007, and is now used to perform petascale biomolecular simulations, including a 64-million-atom model of the HIV capsid, on the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines.”

Interview: Intel Taking Lustre into New Markets

Brent Gorda, GM, Intel High Performance Data Division

“It’s been nearly three years since Intel acquired Whamcloud and its Lustre engineering team. With Intel’s recent announcement that Lustre will power the 2018 Aurora supercomputer at Argonne, we took the opportunity to catch up with Brent Gorda, general manager of the High Performance Data Division at Intel Corporation.”

E4-ARKA: ARM64+GPU+IB is Now Here


“E4 Computer Engineering has introduced ARKA, the first server solution based on a 64-bit ARM SoC and dedicated to HPC. The compute node is boosted by discrete NVIDIA K20 GPUs, with 10Gb Ethernet and FDR InfiniBand networks implemented by default. In this presentation, the hardware configuration of the compute node is described in detail, and the unique capabilities of the ARM+GPU+IB combination are illustrated through a range of synthetic benchmarks and application tests, with particular attention to molecular dynamics software.”

Video: HPC Solution Stack on OpenPOWER


“This demo shows how IBM OpenPOWER can serve as the foundation of a complete High Performance Computing solution. From HPC cluster deployment, job scheduling, system management, and application management to the scientific computing workloads that run on top of them, all of these components can be built on the IBM OpenPOWER platform with good usability and performance. The demo also shows the simplicity of migrating a complete x86-based HPC stack to the OpenPOWER platform.”