
XSEDE & UC Berkeley Offer Online Parallel Computing Course


The XSEDE project and the University of California, Berkeley are offering an online course on parallel computing for graduate students and advanced undergraduates.

Experts Focus on Code Efficiency at ISC 2015

DK Panda from Ohio State University conducts a tutorial at ISC 2015.

In this special guest feature, Robert Roe from Scientific Computing World explores the efforts made by top HPC centers to scale software codes to the extreme levels necessary for exascale computing. “The speed with which supercomputers process useful applications is more important than rankings on the TOP500, experts told the ISC High Performance Conference in Frankfurt last month.”

Titan Supercomputer Powers the Future of Forecasting


Knowing how the weather will behave in the near future is indispensable for countless human endeavors. Now, researchers at ECMWF are leveraging the computational power of the Titan supercomputer at Oak Ridge to improve weather forecasting.

Training the Next Generation of Code Developers for HPC – Part 2

Rob Farber gives a tutorial at SC14

This is the second article in a two-part series about the challenges facing the HPC community in training people to write code and develop algorithms for current and future massively parallel, massive-scale HPC systems.

Colfax to Offer Free Online Training for Intel Code Modernization


Today Colfax International announced free online workshops on parallel programming and optimization for Intel architecture, including Intel Xeon processors and Intel Xeon Phi coprocessors. “The Hands-on Workshop (HOW) series will introduce best practices to researchers and developers to efficiently extract maximum performance out of modern parallel processors, achieving shorter time to solution, higher research productivity, and future-proof design.”

Training the Next Generation of Code Developers for HPC

Rob Farber

This is the first article in a two-part series by Rob Farber about the challenges facing the HPC community in training people to write code and develop algorithms for current and future massively parallel, massive-scale HPC systems.

Tuning Bioinformatics Codes with Allinea Performance Reports

Mark O'Connor (left) and Rich Brueckner (right)

In this video from ISC 2015, Mark O’Connor from Allinea demonstrates how the company’s Performance Reports tool enables coders to speed up the Discovar bioinformatics code. “Allinea Performance Reports are the most effective way to characterize and understand the performance of HPC application runs. One single-page HTML report elegantly answers a range of vital questions for any HPC installation.”

Introducing the Intel Modern Code Community

Scott Apeland, Director, Intel Developer Program

“Building on the success of the Intel Parallel Computing Centers, Intel is announcing the Intel Modern Code Developer Community to help HPC developers to code for maximum performance on current and future hardware. Targeting over 400,000 HPC-focused developers and partners, the program brings tools, training, knowledge and support to developers worldwide by offering access to a network of elite experts in parallelism and HPC. The broader developer community can now gain the skills needed to unlock the full potential of Intel hardware and enable the next decade of discovery.”

Why Modernize Code?


To speed up applications, a developer must learn to take advantage of the multiple threads, cores, and sockets found on a single server or across a cluster. Simply hoping for a faster CPU won't cut it anymore.
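As a minimal illustration of the point above, the sketch below spreads a CPU-bound computation across several cores using Python's standard-library process pool. The function names and the choice of workload are hypothetical, chosen only to show the pattern of splitting work into chunks and combining partial results:

```python
# Illustrative sketch: using multiple cores instead of waiting for a
# faster one. All names here are made up for the example.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """CPU-bound work on one chunk: sum of squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into chunks and sum each chunk on a separate core."""
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as the serial loop, but the work runs in parallel.
    print(parallel_sum_of_squares(100_000))
```

The same decomposition idea carries over to threads within a socket (e.g. OpenMP) and to ranks across a cluster (e.g. MPI); only the mechanism for distributing the chunks changes.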

Concurrent Kernel Offloading


“The combination of using a host CPU such as an Intel Xeon combined with a dedicated coprocessor such as the Intel Xeon Phi coprocessor has been shown in many cases to improve the performance of an application by significant amounts. When the datasets are large enough, it makes sense to offload as much of the workload as possible. But is this the case when the potential offload data sets are not as large?”
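The trade-off the excerpt raises can be sketched with a toy cost model: offloading pays a fixed transfer/launch overhead in exchange for a faster compute rate, so it only wins once the dataset is large enough to amortize that overhead. All rates and overheads below are invented illustrative numbers, not measurements of any real Xeon or Xeon Phi system:

```python
# Toy cost model for the offload decision (hypothetical numbers).

def host_time(n, host_rate=1.0):
    """Time to process n work items entirely on the host CPU."""
    return n / host_rate

def offload_time(n, transfer_overhead=50.0, coproc_rate=4.0):
    """Fixed data-transfer/launch cost plus faster coprocessor compute."""
    return transfer_overhead + n / coproc_rate

def should_offload(n):
    """Offload only when it beats running on the host."""
    return offload_time(n) < host_time(n)

if __name__ == "__main__":
    # With these made-up numbers, break-even is n > overhead / (1 - 1/4),
    # i.e. roughly n > 67: small datasets stay on the host.
    for n in (10, 50, 100, 1000):
        print(n, should_offload(n))
```

The qualitative conclusion matches the article's question: below the break-even size the transfer overhead dominates and the work should stay on the host, which is why concurrent kernel offloading of small workloads needs careful analysis.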