ISC 2019 Student Cluster Competition: Day-by-Day Drama, Winners Revealed!

In this special guest feature, Dan Olds from OrionX continues his first-hand coverage of the Student Cluster Competition at the recent ISC 2019 conference. “The ISC19 Student Cluster Competition in Frankfurt, Germany had one of the closest and most exciting finishes in cluster competition history. The overall winner was decided by just over two percentage points and the margin between third and fourth place was less than a single percentage point.”

Call for Papers: ISAV 2019 In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization

The ISAV 2019 Workshop has issued its Call for Papers. Held in conjunction with SC19, the In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization Workshop takes place Nov. 18, 2019 in Denver. “The workshop brings together researchers, developers and practitioners from industry, academia, and government laboratories developing, applying, and deploying in situ methods in extreme-scale, high performance computing. The goal is to present research findings, lessons learned, and insights related to developing and applying in situ methods and infrastructure across a range of science and engineering applications in HPC environments.”

Large-Scale Optimization Strategies for Typical HPC Workloads

Liu Yu from Inspur gave this talk at PASC19. “Ensuring performance of applications running on large-scale clusters is one of the primary focuses in HPC research. In this talk, we will show our strategies on performance analysis and optimization for applications in different fields of research using large-scale HPC clusters.”

HPC Goes Green at Goonhilly Earth Station with Submer Immersive Cooling

“As an innovator in hyper-efficient immersion cooling technology, Submer Technologies is partnering with Goonhilly and 2CRSI to provide HPC solutions with higher performance at less than half the energy consumption of traditional datacenters. A SmartPod immersion cooling system installed on-site will be running CPU and GPU-intensive simulations using HPC servers provided by 2CRSI using the latest Nvidia and AMD chipsets to showcase the future of datacenters during a series of special events over the next several months.”

John Shalf from LBNL on Computing Challenges Beyond Moore’s Law

In this special guest feature from Scientific Computing World, Robert Roe interviews John Shalf from LBNL on the development of digital computing in the post-Moore's law era. “In his keynote speech at the ISC conference in Frankfurt, Shalf described the lab-wide project at Berkeley and the DOE’s efforts to overcome these challenges by accelerating the development and design of new computing technologies.”

Podcast: Tackling Massive Scientific Challenges with AI/HPC Convergence

In this Chip Chat podcast, Brandon Draeger from Cray describes the unique needs of HPC customers and how new Intel technologies in Cray systems are helping to deliver improved performance and scalability. “More and more, we are seeing the convergence of AI and HPC – users investigating how they can use AI to complement what they are already doing with their HPC workloads. This includes using machine and deep learning to analyze results from a simulation, or using AI techniques to steer where to take a simulation on the fly.”

Podcast: ExaScale is a 4-way Competition

In this podcast, the RadioFree team discusses the four-way competition for exascale computing between the US, China, Japan, and Europe. “The European effort is targeting two pre-exascale installations in the coming months, and two actual exascale installations in the 2022-2023 timeframe, at least one of which will be based on European technology.”

The Challenges of Updating Scientific Codes for New HPC Architectures

In this video from PASC19 in Zurich, Benedikt Riedel from the University of Wisconsin describes the challenges researchers face when updating their scientific codes for new HPC architectures. He then describes his work on the IceCube Neutrino Observatory.

Supercomputing Potential Impacts of a Major Quake by Building Location and Size

Researchers from Lawrence Livermore and Berkeley Lab are using supercomputers to quantify earthquake hazard and risk across the Bay Area. Their work focuses on the impact of high-frequency ground motion on thousands of representative buildings of different sizes spread across the region. “While working closely with the NERSC operations team in a simulation last week, we used essentially the entire Cori machine – 8,192 nodes, and 524,288 cores – to execute an unprecedented 5-hertz run of the entire San Francisco Bay Area region for a magnitude 7 Hayward Fault earthquake.”

NEC Embraces Open Source Frameworks for SX-Aurora Vector Computing

In this video from ISC 2019, Dr. Erich Focht from NEC Deutschland GmbH describes how the company is embracing open source frameworks for the SX-Aurora TSUBASA Vector Supercomputer. “Until now, with the existing server processing capabilities, developing complex models on graphical information for AI has consumed significant time and host processor cycles. NEC Laboratories has developed the open-source Frovedis framework over the last 10 years, initially for parallel processing in Supercomputers. Now, its efficiencies have been brought to the scalable SX-Aurora vector processor.”