Celebrating 20 Years of the OpenMP API

“The first version of the OpenMP application programming interface (API) was published in October 1997. In the 20 years since then, the OpenMP API and the slightly older MPI have become the two stable programming models that high-performance parallel codes rely on. MPI handles the message passing aspects and allows code to scale out to significant numbers of nodes, while the OpenMP API allows programmers to write portable code to exploit the multiple cores and accelerators in modern machines.”

High Performance Big Data Computing Using Harp-DAAL

Harp-DAAL is a framework developed at Indiana University that brings together the capabilities of big data tools (Hadoop) with techniques previously adopted in high performance computing. Together, these capabilities help users become more productive and gain deeper insights into massive amounts of data.

Let’s Talk Exascale: Making Software Development more Efficient

In this episode of Let’s Talk Exascale, Mike Heroux from Sandia National Labs describes the Exascale Computing Project’s Software Development Kit, an organizational approach to reduce the complexity of the project management of ECP software technology. “My hope is that as we create these SDKs and bring these independently developed products together under a collaborative umbrella, that instead of saying that each of these individual products is available independently, we can start to say that an SDK is available.”

ArrayFire Releases v3.6 Parallel Libraries

Today ArrayFire announced the release of ArrayFire v3.6, the company’s open source library of parallel computing functions supporting CUDA, OpenCL, and CPU devices. This new version of ArrayFire includes several new features that improve performance and usability for applications in machine learning, computer vision, signal processing, statistics, finance, and more. “We use ArrayFire to run the low-level parallel computing layer of SDL Neural Machine Translation Products,” said William Tambellini, Senior Software Developer at SDL. “ArrayFire’s flexibility, robustness, and dedicated support make it a powerful tool to support the development of Deep Learning Applications.”

Pawsey Supercomputing Centre Hosts GPU Hackathon this week

Australia’s Pawsey Supercomputing Centre is hosting a GPU Hackathon this week in Perth, Australia. “The GPU Hackathon is a free event taking place at the Esplanade Hotel in Fremantle, from Monday 16 April to Friday 20 April. Six teams from Australia, the United States, and Europe are gathering in Perth for this 5-day event to adapt their applications for GPU architectures.”

Atos Quantum Learning Machine can now simulate real Qubits

Researchers at the Atos Quantum Laboratory have successfully modeled ‘quantum noise’ and as a result, simulation is more realistic than ever before, and is closer to fulfilling researchers’ requirements. “We are thrilled by the remarkable progress that the Atos Quantum program has delivered as of today,” said Thierry Breton, Chairman and CEO of Atos.

Barbara Chapman Joins Board of OpenMP ARB

Today the OpenMP Architecture Review Board (ARB) announced the appointment of Barbara Chapman to its Board of Directors. “We are delighted to have Prof. Chapman join the OpenMP Board”, says Partha Tirumalai, chairman of the OpenMP Board of Directors. “Her decades of experience in high-performance computing and education will enhance the value OpenMP brings to users all over the world.”

Let’s Talk Exascale: Developing Low Overhead Communication Libraries

In this episode of Let’s Talk Exascale, Scott Baden of LBNL describes the Pagoda Project, which seeks to develop lightweight communication and global address space support for exascale applications. “What our project is addressing is how to keep the fixed cost as small as possible, so that cutting-edge irregular algorithms can move many small pieces of data efficiently.”

Let’s Talk Exascale: Software Ecosystem for High-Performance Numerical Libraries

In this Let’s Talk Exascale podcast, Lois Curfman McInnes from Argonne National Laboratory describes the Extreme-scale Scientific Software Development Kit (xSDK) for ECP, which is working toward a software ecosystem for high-performance numerical libraries. “The project is motivated by the need for next-generation science applications to use and build on diverse software capabilities that are developed by different groups.”

Intel AVX Gives Numerical Computations in Java a Big Boost

Recent Intel® enhancements to Java enable faster and better numerical computing. In particular, the Java Virtual Machine (JVM) now uses Fused Multiply Add (FMA) instructions on Intel® Xeon Phi™ processors with Intel® Advanced Vector Extensions (Intel® AVX) to implement the OpenJDK 9 Math.fma() API. This delivers significant performance improvements for matrix multiplication, the basic computation at the heart of most HPC, machine learning, and AI applications.
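As a rough illustration of how this API is used, the sketch below computes a dot product with Math.fma(), which evaluates a*b + c with a single rounding step; on supported hardware the JIT can compile each call down to one FMA instruction. The class and method names here are illustrative, not from the article.

```java
public class FmaExample {
    // Dot product accumulated with Math.fma: each step performs
    // acc = x[i] * y[i] + acc with a single rounding, which is both
    // faster (one fused instruction) and more accurate than a
    // separate multiply and add.
    static double dot(double[] x, double[] y) {
        double acc = 0.0;
        for (int i = 0; i < x.length; i++) {
            acc = Math.fma(x[i], y[i], acc);
        }
        return acc;
    }

    public static void main(String[] args) {
        double[] a = {1.0, 2.0, 3.0};
        double[] b = {4.0, 5.0, 6.0};
        System.out.println(dot(a, b)); // 1*4 + 2*5 + 3*6 = 32.0
    }
}
```

The same pattern is the inner kernel of a matrix multiplication, which is why FMA support in the JVM matters for the workloads mentioned above.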