
Video: Revolution in Computer and Data-enabled Science and Engineering

Ed Seidel from the University of Illinois gave this talk at the 2017 Argonne Training Program on Extreme-Scale Computing. His talk centers on the need for interdisciplinary research. “Interdisciplinary research (IDR) is a mode of research by teams or individuals that integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines or bodies of specialized knowledge to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline or area of research practice.”

Video: Argonne’s Theta Supercomputer Architecture

Scott Parker gave this talk at the Argonne Training Program on Extreme-Scale Computing. “Designed in collaboration with Intel and Cray, Theta is a 9.65-petaflops system based on the second-generation Intel Xeon Phi processor and Cray’s high-performance computing software stack. Capable of nearly 10 quadrillion calculations per second, Theta will enable researchers to break new ground in scientific investigations that range from modeling the inner workings of the brain to developing new materials for renewable energy applications.”

A Vision for Exascale: Simulation, Data and Learning

Rick Stevens gave this talk at the recent ATPESC training program. “The ATPESC program provides two weeks of intensive training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future. As a bridge to that future, this two-week program fills the gap that exists in the training computational scientists typically receive through formal education or other shorter courses.”

OpenHPC: Project Overview and Updates

Karl Schulz from Intel gave this talk at the MVAPICH User Group. “There is a growing sense within the HPC community for the need to have an open community effort to more efficiently build, test, and deliver integrated HPC software components and tools. To address this need, OpenHPC launched as a Linux Foundation collaborative project in 2016 with combined participation from academia, national labs, and industry. The project’s mission is to provide a reference collection of open-source HPC software components and best practices in order to lower barriers to deployment and advance the use of modern HPC methods and tools.”

Video: How MVAPICH & MPI Power Scientific Research

Adam Moody from LLNL presented this talk at the MVAPICH User Group. “High-performance computing is being applied to solve the world’s most daunting problems, including researching climate change, studying fusion physics, and curing cancer. MPI is a key component in this work, and as such, the MVAPICH team plays a critical role in these efforts. In this talk, I will discuss recent science that MVAPICH has enabled and describe future research that is planned. I will detail how the MVAPICH team has responded to address past problems and list the requirements that future work will demand.”

Infinite Memory Engine: HPC in the FLASH Era

In this RichReport slidecast, James Coomer from DDN presents an overview of the Infinite Memory Engine IME. “IME is a scale-out, flash-native, software-defined, storage cache that streamlines the data path for application IO. IME interfaces directly to applications and secures IO via a data path that eliminates file system bottlenecks. With IME, architects can realize true flash-cache economics with a storage architecture that separates capacity from performance.”

3X Performance Boost Using Intel Advisor and Intel Trace Analyzer in Astrophysics Simulations

On today’s processors, it is crucial to both vectorize (using AVX* or SIMD* instructions) and parallelize software to realize the full performance potential of the processor. By optimizing their MHD astrophysics applications with tools from Intel Parallel Studio XE, and running on the latest Intel hardware, the NSU team achieved a performance speed-up of 3X, cutting the standard time for calculating one problem from one week to just two days.

Intel® HPC Orchestrator and OpenHPC: Trends and Directions

Sharing a common architecture, Intel® HPC Orchestrator and OpenHPC are changing the face of HPC by providing a cohesive and comprehensive system software stack. Dr. Robert Wisniewski, Chief Software Architect Extreme Scale Computing at Intel Corporation, discusses the advantages of this approach and how to leverage it to bring together HPC and the cloud.

Intel® Architecture Deployment at Texas Tech University Relies on Intel® HPC Orchestrator

When it came time to perform a substantial upgrade of the TTU IT Division’s High Performance Computing Center at Texas Tech University, the challenge was to easily manage such a large expansion, which would effectively double the center’s parallel computing capacity. Since the existing 10,000+ core system includes several equipment generations, as well as the latest technology from Intel, HPCC staff at TTU chose Intel® HPC Orchestrator for the task. In this article, Alan Sill, Senior Director of the TTU HPCC, explains why.

Reliability, Scalability and Performance – the Impact of Intel HPC Orchestrator

When it comes to getting the most performance out of your HPC system, it’s the small things that count. David Lombard, Sr. Principal Engineer at Intel Corporation, explains: “Intel HPC Orchestrator encapsulates the important tradeoffs, and pays attention to the small details that can greatly impact how well the underlying features of the hardware are leveraged to deliver better performance and scalability.”