Addressing Computing Challenges at CERN openlab

In this special guest feature from Scientific Computing World, Robert Roe speaks with Dr Maria Girone, Chief Technology Officer at CERN openlab, ahead of her keynote presentation at ISC High Performance. “The challenge of creating the largest particle accelerator is now complete, but there is another challenge – harnessing all of the data produced through experimentation. This will become even greater when the ‘high-luminosity’ LHC experiments begin in 2026.”

Abstractions and Directives for Adapting Wavefront Algorithms to Future Architectures

Robert Searles from the University of Delaware gave this talk at PASC18. “Architectures are rapidly evolving, and exascale machines are expected to offer billion-way concurrency. We need to rethink algorithms, languages and programming models, among other components, in order to migrate large scale applications and explore parallelism on these machines. Although directive-based programming models allow programmers to worry less about programming and more about science, expressing complex parallel patterns in these models can be a daunting task, especially when the goal is to match the performance that the hardware platforms can offer.”
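
As a rough sketch of the pattern in question (illustrative, not code from the talk), a 2D wavefront sweep can be expressed with OpenACC directives in C: cells on the same anti-diagonal are mutually independent, so each diagonal is offloaded as a parallel loop while the diagonals themselves run in sequence. The grid size and update formula below are assumptions for demonstration.

    #include <stdio.h>
    #define N 1024

    static double grid[N][N];

    int main(void) {
        /* fixed boundary values along the top row and left column */
        for (int k = 0; k < N; k++) {
            grid[0][k] = 1.0;
            grid[k][0] = 1.0;
        }

        #pragma acc data copy(grid)
        for (int d = 2; d <= 2 * (N - 1); d++) {
            /* cells on anti-diagonal d = i + j are independent of each other */
            int ilo = (d - (N - 1) > 1) ? d - (N - 1) : 1;
            int ihi = (d - 1 < N - 1) ? d - 1 : N - 1;
            #pragma acc parallel loop present(grid)
            for (int i = ilo; i <= ihi; i++) {
                int j = d - i;
                grid[i][j] = 0.5 * (grid[i - 1][j] + grid[i][j - 1]);
            }
        }

        printf("grid[N-1][N-1] = %f\n", grid[N - 1][N - 1]);
        return 0;
    }

The sequential outer loop over diagonals is exactly the kind of complex dependence structure the talk refers to: the directives are simple, but choosing what to parallelize is not.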

IO500 List Showcases World’s Fastest Storage Systems for HPC

In this video from ISC 2018, John Bent and Jay Lofstead describe how the IO500 benchmark measures storage performance in HPC environments. “The IO500 benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. The list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data.”

From Weather Dwarfs to Kilometre-Scale Earth System Simulations

Nils P. Wedi from ECMWF gave this talk at PASC18. “The increasingly large amounts of data being produced by weather and climate simulations and earth system observations are sometimes characterised as a deluge. This deluge of data is both a challenge and an opportunity. The main opportunities are to make use of this wealth of data to 1) improve knowledge by extracting additional knowledge from the data and 2) improve the quality of the models themselves by analysing the accuracy, or lack thereof, of the resultant simulation data.”

NEC Accelerates Machine Learning with Vector Computing

In this video from ISC 2018, Takeo Hosomi from NEC describes how vector computing can accelerate Machine Learning workloads. “Machine learning is the key technology for data analytics and artificial intelligence. Recent progress in this field opens opportunities for a wide variety of new applications. Our department has been at the forefront of developments in such areas as deep learning, support vector machines and semantic analysis for over a decade. Many of our technologies have been integrated in innovative products and services of NEC.”
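
As an illustrative sketch (not NEC’s code), the kind of computation a vector processor accelerates is a long, contiguous, dependency-free inner loop, such as the dot product at the heart of a linear support vector machine. The function name, signature, and data are assumptions for demonstration.

    #include <stdio.h>

    /* Linear-kernel SVM decision value: a long, stride-1 reduction that a
     * vector processor can pipeline efficiently. */
    double svm_decision(const double *w, const double *x, double bias, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++)   /* contiguous, dependency-free: vectorizes */
            acc += w[i] * x[i];
        return acc + bias;
    }

    int main(void) {
        double w[4] = {0.5, -0.25, 1.0, 0.0}, x[4] = {1, 2, 3, 4};
        printf("decision = %f\n", svm_decision(w, x, -0.5, 4));
        return 0;
    }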

How DMTF and Redfish Ease System Administration

In this video from the Dell EMC HPC Community meeting, Alan Sill from Texas Tech University describes how DMTF and the Redfish project will ease system administration for HPC clusters. “DMTF’s Redfish is a standard API designed to deliver simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). An open industry standard specification and schema, Redfish specifies a RESTful interface and utilizes defined JSON payloads – usable by existing client applications and browser-based GUI.”
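
For a sense of what the RESTful interface looks like in practice, here is a minimal sketch in C using libcurl. The BMC hostname and credentials are placeholders; /redfish/v1/ is the standard Redfish service root and Systems is its standard collection of compute nodes.

    /* build: cc redfish_get.c -lcurl */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* placeholder endpoint and credentials */
        curl_easy_setopt(curl, CURLOPT_URL, "https://bmc.example.com/redfish/v1/Systems");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "admin:password");
        /* many BMCs ship self-signed certificates; relax checks for a demo only */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);

        CURLcode rc = curl_easy_perform(curl);  /* JSON payload printed to stdout */
        if (rc != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }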

Massive-Scale Analytics Applied to Real-World Problems

David Bader from Georgia Tech gave this talk at PASC18. “Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams.”
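
To make the “sparsity and lack of locality” concrete, here is a minimal sketch (illustrative, not from the talk) of breadth-first search over a graph stored in compressed sparse row (CSR) form. The indirect accesses through col_idx are what defeat caches and complicate scaling on real graph data.

    #include <stdio.h>

    #define NV 6

    /* CSR: row_ptr[v]..row_ptr[v+1] indexes v's neighbors in col_idx */
    int row_ptr[NV + 1] = {0, 2, 4, 6, 7, 8, 8};
    int col_idx[8]      = {1, 2, 3, 4, 4, 5, 5, 5};

    int main(void) {
        int dist[NV], queue[NV], head = 0, tail = 0;
        for (int v = 0; v < NV; v++) dist[v] = -1;

        dist[0] = 0;                 /* start BFS from vertex 0 */
        queue[tail++] = 0;
        while (head < tail) {
            int v = queue[head++];
            for (int e = row_ptr[v]; e < row_ptr[v + 1]; e++) {
                int u = col_idx[e];  /* indirect access: poor locality */
                if (dist[u] < 0) { dist[u] = dist[v] + 1; queue[tail++] = u; }
            }
        }
        for (int v = 0; v < NV; v++)
            printf("vertex %d: distance %d\n", v, dist[v]);
        return 0;
    }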

Porting HPC Codes with Directives and OpenACC

In this video from ISC 2018, Michael Wolfe from OpenACC.org describes how scientists can port their code to accelerated computing. “OpenACC is a user-driven directive-based performance-portable parallel programming model designed for scientists and engineers interested in porting their codes to a wide variety of heterogeneous HPC hardware platforms and architectures with significantly less programming effort than required with a low-level model.”
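
As a minimal illustration of the directive approach (the canonical saxpy example, not code from the talk), a single pragma is enough to offload a loop; the compiler handles data movement and kernel generation.

    #include <stdio.h>

    void saxpy(int n, float a, float *restrict x, float *restrict y) {
        /* one directive offloads the loop and describes the data traffic */
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        float x[4] = {1, 2, 3, 4}, y[4] = {1, 1, 1, 1};
        saxpy(4, 2.0f, x, y);
        printf("%f %f %f %f\n", y[0], y[1], y[2], y[3]);  /* 3 5 7 9 */
        return 0;
    }

Compiled without an OpenACC compiler, the pragma is ignored and the code still runs correctly on the host, which is part of the portability argument.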

Why the World Is Starting to Look Like a Giant HPC Cluster

“AI, machine learning, is not a (traditional) HPC workload. However, it takes an HPC machine to do it. If you look at HPC, generally, you take a model or things like that, you turn it into an extraordinarily large amount of data, and then you go find some information in that data. Machine learning, on the other hand, takes an extraordinarily large amount of information and collapses it into an idea or a model.”

NEC Accelerates HPC with Vector Computing at ISC 2018

In this video from ISC 2018, Oliver Tennert from NEC Deutschland GmbH introduces the company’s vector computing technologies for HPC and Machine Learning. “The NEC SX-Aurora TSUBASA is the newest in the line of NEC SX Vector Processors with the world’s highest memory bandwidth. The processor, implemented in a PCIe form factor, can be deployed in many flexible configurations together with a standard x86 cluster.”