Training the Next Generation of Code Developers for HPC

This is the first article in a two-part series by Rob Farber about the challenges facing the HPC community in training people to write code and develop algorithms for current and future massively parallel, massive-scale HPC systems.

Tuning Bioinformatics Codes with Allinea Performance Reports

In this video from ISC 2015, Mark O’Connor from Allinea demonstrates how the company’s Performance Reports tool enables coders to speed up the Discovar bioinformatics code. “Allinea Performance Reports are the most effective way to characterize and understand the performance of HPC application runs. One single-page HTML report elegantly answers a range of vital questions for any HPC installation.”

Introducing the Intel Modern Code Community

“Building on the success of the Intel Parallel Computing Centers, Intel is announcing the Intel Modern Code Developer Community to help HPC developers to code for maximum performance on current and future hardware. Targeting over 400,000 HPC-focused developers and partners, the program brings tools, training, knowledge and support to developers worldwide by offering access to a network of elite experts in parallelism and HPC. The broader developer community can now gain the skills needed to unlock the full potential of Intel hardware and enable the next decade of discovery.”

Why Modernize Code?

To speed up applications, a developer must learn to take advantage of the multiple threads, cores, and sockets found on a single server or across a cluster. Simply waiting for a faster CPU no longer cuts it.
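The article covers the why rather than the how, but as a minimal sketch of a first modernization step (the loop and array names here are invented for illustration), a single OpenMP pragma spreads a serial loop across all the cores of a node:

```c
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);

    /* Initialize the input arrays. */
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        b[i] = 2.0 * i;
    }

    /* One pragma turns the serial loop into a parallel one:
       each thread handles a chunk of the iteration space, so
       the work uses every core instead of a single one. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f (up to %d threads)\n",
           c[N - 1], omp_get_max_threads());

    free(a); free(b); free(c);
    return 0;
}
```

Compiled with `cc -fopenmp`, the loop runs on every core of the node; drop the flag and the identical source still builds and runs serially, which is part of what makes OpenMP a common first step in code modernization.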

Concurrent Kernel Offloading

“The combination of a host CPU such as an Intel Xeon with a dedicated coprocessor such as the Intel Xeon Phi has been shown in many cases to improve the performance of an application by significant amounts. When the datasets are large enough, it makes sense to offload as much of the workload as possible. But is this the case when the potential offload datasets are not as large?”
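As a rough sketch of the concurrent-offloading idea (the kernel, sizes, and array names are hypothetical, and the offload pragmas assume the Intel compiler's language extensions for the Xeon Phi), each host thread can issue its own small offload, so the coprocessor stays busy even when no single dataset is large:

```c
#include <omp.h>
#include <stdio.h>

#define CHUNK   4096
#define NCHUNKS 8

/* Mark the kernel for the coprocessor build; with the Intel
   compiler this attribute generates a Xeon Phi version too. */
__attribute__((target(mic)))
void scale(double *x, int n, double s)
{
    for (int i = 0; i < n; i++)
        x[i] *= s;
}

int main(void)
{
    static double data[NCHUNKS][CHUNK];

    for (int c = 0; c < NCHUNKS; c++)
        for (int i = 0; i < CHUNK; i++)
            data[c][i] = c + i;

    /* Each host thread offloads its own small kernel. Run
       concurrently, the small offloads together can keep the
       coprocessor's many cores occupied. */
    #pragma omp parallel for
    for (int c = 0; c < NCHUNKS; c++) {
        double *x = data[c];
        #pragma offload target(mic:0) inout(x : length(CHUNK))
        scale(x, CHUNK, 2.0);
    }

    printf("done: data[0][1] = %f\n", data[0][1]);
    return 0;
}
```

The design point is that a single small offload cannot fill a many-core coprocessor, but several issued concurrently from different host threads can, which is the case the article's question is probing.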

Why Hardware is Leaving Software Behind

In the first report from last week’s PRACEdays15 conference in Dublin, Tom Wilkie from Scientific Computing World considers why so much Exascale software will be open source and why engineers are not using parallel programs.

Fortran Still Going Strong

Fortran is still going strong: NERSC estimates that over half the compute hours on its systems are consumed by Fortran codes. This is quite remarkable, given that Fortran first appeared about 60 years ago.

NAG Library adds New Algorithms for Application Developers

Today the Numerical Algorithms Group (NAG) released the latest version of its NAG Library, which adds over 80 new mathematical and statistical algorithms.

Interview: AutoTune – Automated Optimization and Tuning

The main goal of AutoTune is the automatic optimization of applications in the area of HPC, targeting both performance optimization and energy efficiency. In this interview, Michael Gerndt from the Technische Universitaet Muenchen tells us more about the project.

Fault Trees

As datasets and simulations continue to grow in size and complexity, interactivity with the data should be maintained. It is therefore important to understand how SIMD parallelism can be applied when evaluating large fault tree expressions over a large volume of inputs; the sketch below shows one bit-parallel flavor of the idea.
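In this illustration (the tree and input values are invented for the example), each bit of a 64-bit word carries one independent input scenario, so a single bitwise operation evaluates a gate for 64 scenarios at once, in the spirit of SIMD lanes:

```c
#include <stdint.h>
#include <stdio.h>

/* Evaluate the example fault tree TOP = (A AND B) OR C for 64
   scenarios at once: bit i of each word is basic event i's state
   in scenario i, so the bitwise operators act as 64 parallel
   lanes of boolean gate logic. */
static uint64_t eval_tree(uint64_t A, uint64_t B, uint64_t C)
{
    uint64_t and_gate = A & B;   /* 64 AND gates in one operation */
    return and_gate | C;         /* 64 OR gates in one operation  */
}

int main(void)
{
    /* Hypothetical basic-event samples: bit i of each word says
       whether that event occurred in scenario i. */
    uint64_t A = 0xF0F0F0F0F0F0F0F0ULL;
    uint64_t B = 0xFF00FF00FF00FF00ULL;
    uint64_t C = 0x0000000000000001ULL;

    uint64_t top = eval_tree(A, B, C);

    /* Count how many of the 64 scenarios trip the top event. */
    int failures = __builtin_popcountll(top);
    printf("top event occurred in %d of 64 scenarios\n", failures);
    return 0;
}
```

The same pattern extends to wider hardware vectors (e.g., 256- or 512-bit registers), where one instruction evaluates a gate for hundreds of input scenarios, which is what makes interactive evaluation of very large fault trees plausible.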